OK, I'll wake up now.
Transaction 2 will actually wait for transaction 1 to complete before
updating "seed".
But that drops you on the other horn of the dilemma, i.e. loss of
concurrency (partial at least).
That may not matter for your application, but for very high traffic, this
will become a problem.
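
To see what I mean, try Gerald's statements (quoted below) from two psql
sessions; a rough, untested sketch, using his table and column names:

-- Session 1
begin;
update seed_table set seed = seed + 1 where id = 'abc';  -- takes a row lock on 'abc'

-- Session 2
begin;
update seed_table set seed = seed + 1 where id = 'abc';  -- blocks on that row lock

-- Session 1
insert into some_table
    values ((select seed from seed_table where id = 'abc'), other_stuff);
commit;                                                   -- releases the lock

-- Session 2 now wakes up, sees the committed seed value, and carries on.
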
> -----Original Message-----
> From: Jeff Eckermann
> Sent: Thursday, April 05, 2001 10:36 AM
> To: 'Gerald Gutierrez'; 'Andrew Perrin'
> Cc: PgSQL-SQL
> Subject: RE: [SQL] RE: serial type; race conditions
>
> OK:
> * Transaction 1 commences, sets seed to seed + 1
> * Transaction 2 commences, sets seed to seed + 1
> * Transaction 1 inserts into some_table, selects seed (sequence now
> has a hole)
> * Transaction 2 inserts into some_table, selects seed (same value as
> just used by transaction 1)
> * The second transaction to commit will either create a duplicate
> "seed" value, or roll back because of a unique constraint, still leaving a
> hole in the sequence.
>
> -----Original Message-----
> From: Gerald Gutierrez [SMTP:gutz@kalador.com]
> Sent: Wednesday, April 04, 2001 7:24 PM
> To: Jeff Eckermann; 'Andrew Perrin'
> Cc: PgSQL-SQL
> Subject: RE: [SQL] RE: serial type; race conditions
>
>
> It just seems like conflicting requirements, so it's a tug-of-war.
>
> I've always done it by doing all the processing I can and then, from
> inside a transaction, doing:
>
> update seed_table set seed = seed + 1 where id = 'abc';
> insert into some_table
>     values ((select seed from seed_table where id = 'abc'), other_stuff);
>
> The processing would be concurrent and only the update & insert would be
> "serialized". It would be portable and shouldn't contain holes, but is
> slower than sequences.
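>
> Wrapped explicitly in the transaction, it is just (untested sketch,
> assuming seed_table has columns id and seed):
>
> begin;
> update seed_table set seed = seed + 1 where id = 'abc';
> insert into some_table
>     values ((select seed from seed_table where id = 'abc'), other_stuff);
> commit;
>
> The update holds a row lock on the 'abc' row until the commit, which is
> what serializes concurrent writers.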
>
>
> Gerald.
>
>
> -----Original Message-----
> From: pgsql-sql-owner@postgresql.org
> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Jeff Eckermann
> Sent: Thursday, March 29, 2001 10:48 AM
> To: 'Andrew Perrin'
> Cc: PgSQL-SQL
> Subject: [SQL] RE: serial type; race conditions
>
>
> Probably just me: but I don't see the point. Consider:
> * User 1 commences insert transaction: grabs nextval(sequence), max(foo)
> * User 2 commences insert transaction: grabs nextval(sequence), max(foo)
> * User 1 commits
> * User 2 commits (insert has sequence value one higher than for User 1,
>   but same value for max(foo) + 1), or
> * If foo has a unique constraint, transaction 2 will roll back.
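>
> For example (made-up table and sequence names, untested):
>
> -- Session 1
> begin;
> insert into foo_table (id, foo)
>     values (nextval('foo_seq'),
>             (select coalesce(max(foo), 0) + 1 from foo_table));
>
> -- Session 2, before session 1 commits
> begin;
> insert into foo_table (id, foo)
>     values (nextval('foo_seq'),
>             (select coalesce(max(foo), 0) + 1 from foo_table));
>
> Both inserts compute the same max(foo) + 1: without a unique constraint
> on foo you get a duplicate; with one, the second insert waits for
> session 1 and then fails.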
>
> Either way, I don't see what has been gained. All of the messages I have
> read on this subject conclude with the same point: the choice is to either:
> * accept a unique sequence with holes, or
> * accept loss of concurrency (as in the example above).
>
> Or am I just missing the point?
>
> > -----Original Message-----
> > From: Andrew Perrin [SMTP:aperrin@socrates.berkeley.edu]
> > Sent: Thursday, March 29, 2001 8:46 AM
> > To: pgsql@symcom.com
> > Cc: PgSQL-SQL
> > Subject: Re: serial type; race conditions
> >
> > I ditto what Bruce said - trying to get a true sequence without gaps is
> > a losing battle. Why don't you, instead, use a serial column as the real
> > sequence, and then a trigger that simply inserts max(foo) + 1 in a
> > different column? Then when you need to know that number, do something
> > like:
> >
> > SELECT number_i_care_about FROM table
> >     WHERE serial_number = currval('serial_number_seq');
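> >
> > The trigger could look something like this (untested plpgsql sketch;
> > the table name is made up):
> >
> > create function set_gapless_number() returns trigger as $$
> > begin
> >     new.number_i_care_about :=
> >         (select coalesce(max(number_i_care_about), 0) + 1 from mytable);
> >     return new;
> > end;
> > $$ language plpgsql;
> >
> > create trigger gapless_number_trig
> >     before insert on mytable
> >     for each row execute procedure set_gapless_number();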
> >
> > ap
> >
> > ----------------------------------------------------------------------
> > Andrew J Perrin - Ph.D. Candidate, UC Berkeley, Dept. of Sociology
> > (Soon: Asst Professor of Sociology, U of North Carolina, Chapel Hill)
> > andrew_perrin@unc.edu - http://www.unc.edu/~aperrin
> >
> > On Thu, 29 Mar 2001, Bruce Momjian wrote:
> >
> > > > How does currval work if you are not inside a transaction? I have
> > > > been experimenting with inserting into a table that has a sequence.
> > > > If the insert fails (not using a transaction) because of bad client
> > > > input, then the next insert gets the proper next number in the
> > > > sequence.
> > >
> > > If you are in a transaction, and the INSERT succeeds but the
> > > transaction rolls back, the sequence value does not get reused. Each
> > > backend has a local variable that holds the most recent sequence value
> > > assigned. That is how currval works.
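> > >
> > > For instance (made-up table and sequence names):
> > >
> > > insert into mytable (id, val) values (nextval('mytable_id_seq'), 'x');
> > > select currval('mytable_id_seq');  -- the value the insert above just
> > >                                    -- used, local to this backend,
> > >                                    -- transaction or no transaction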
> > >
> > > >
> > > > given sequence values 1,2,3,4,5 exist
> > > > insert into the table with date 1/111/01 (obviously wrong), insert fails...
> > > > try again with good data, insert succeeds and gets number 6 in the
> > > > sequence.
> > > >
> > > > I'm getting what I want: a sequence number that does not increment
> > > > on a failed insert. However, how do I get the assigned sequence
> > > > number with currval when I am not using a transaction? What
> > > > happens when multiple users are inserting at the same time?
> > > >
> > > > I am trying to create a sequence without any "missing" numbers. If
> > > > there is a failure to insert and a sequence number is "taken", I want
> > > > the empty row.
> > > >
> > > > Thanks, .... it is getting clearer....
> > >
> > > You really can't use sequences with no gaps. Sequence numbers are not
> > > _held_ until commit, because that would block other backends trying
> > > to get sequence numbers.
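> > >
> > > For instance (made-up sequence name):
> > >
> > > begin;
> > > select nextval('gapless_seq');   -- say this returns 7
> > > rollback;                        -- the 7 is not handed back
> > > select nextval('gapless_seq');   -- returns 8: a gap at 7, but no
> > >                                  -- blocking of other backends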
> > >
> > > --
> > > Bruce Momjian | http://candle.pha.pa.us
> > > pgman@candle.pha.pa.us | (610) 853-3000
> > > + If your life is a hard drive, | 830 Blythe Avenue
> > > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026
> > >