Thread: guaranteeing that a sequence never skips
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

--
dave
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>
> Sometimes a business requirement is that a serial sequence never skips,
> e.g. when generating invoice/ticket/formal letter numbers. Would an
> INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.
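In practice that looks something like the following (untested sketch;
"payload" and its value are placeholders for your real columns):

  BEGIN;
  -- Serialize all writers: EXCLUSIVE mode blocks other INSERT/UPDATE/DELETE
  -- on t but still allows plain SELECTs.
  LOCK TABLE t IN EXCLUSIVE MODE;
  INSERT INTO t (id, payload)
  SELECT COALESCE(MAX(id), 0) + 1, 'some data'
    FROM t;
  COMMIT;

Every insert into t now queues behind the lock, which is exactly why the
parallel performance is poor.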
On Sunday 03 October 2004 10:21 am, Scott Marlowe wrote:
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> >
> > Sometimes a business requirement is that a serial sequence never skips,
> > e.g. when generating invoice/ticket/formal letter numbers. Would an
> > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> > or must I install a trigger too to do additional checking?
>
> You will have to lock the whole table and your parallel performance will
> be poor.

There was a thread about this a while back. I'm using a separate counter
table and stored procs that increment the value of the counter, similar to
the nextval used for sequences. My "nextval" locks the counter row in
question using "... FOR UPDATE". While I'm generating the record that
requires the sequential number, I'm in the same stored proc and therefore
in a transaction; if I have to roll back, the counter in the counter table
rolls back too.

You just have to make sure the routine that generates whatever you have to
generate doesn't take long, because parallel uses of the same counter will
block until your proc commits or rolls back.

UC
--
Open Source Solutions 4U, LLC
2570 Fleetwood Drive          Phone: +1 650 872 2425
San Bruno, CA 94066           Cell:  +1 650 302 2405
United States                 Fax:   +1 650 872 2417
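A sketch of the counter-table approach UC describes (untested; the
"counters" table and "next_counter" function are invented names, and a row
per counter must be seeded before first use):

  CREATE TABLE counters (
      name  text PRIMARY KEY,
      value integer NOT NULL
  );

  CREATE OR REPLACE FUNCTION next_counter(p_name text) RETURNS integer AS $$
  DECLARE
      v integer;
  BEGIN
      -- FOR UPDATE locks this counter's row; concurrent callers for the
      -- same counter block here until our transaction commits or aborts.
      SELECT value + 1 INTO v
        FROM counters
       WHERE name = p_name
         FOR UPDATE;

      UPDATE counters SET value = v WHERE name = p_name;
      RETURN v;
  END;
  $$ LANGUAGE plpgsql;

Because the increment happens inside the caller's transaction, rolling
back the invoice insert rolls the counter back too, so no number is
burned.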
A long time ago, in a galaxy far, far away, lists@zara.6.isreserved.com
(David Garamond) wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be
> 34)?

What is guaranteed is that sequence values will not be repeated (assuming
you don't do a setval() :-).) If value caching is turned on, then each
connection may grab them in groups of (say) 100, so that one insert, on
one (not-too-busy) connection might add in 5399, and an insert on another
connection, that has been much busier, might add in 6522, and those values
differ quite a bit :-).

> Sometimes a business requirement is that a serial sequence never
> skips, e.g. when generating invoice/ticket/formal letter
> numbers. Would an INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1
> FROM t, ...) suffice, or must I install a trigger too to do
> additional checking?

This is a troublesome scenario...

1. Your requirement makes it MUCH harder to deal with concurrent updates
   efficiently. That "SELECT MAX()" destroys the efficiency achieved by
   the use of sequences.

2. It may be difficult to avoid deadlocks of some sort. Suppose several
   inserts take place more or less simultaneously. In that case, they
   might all get the same value of SELECT MAX(), and only one of them
   could therefore succeed. The others would get "clotheslined" by the
   UNIQUE constraint, like a hapless fugitive that runs into a tree
   branch, and you'll see transactions failing due to concurrency. Not a
   good thing.

Another possibility would be to have _two_ fields: one, call it C1, using
a sequence, and the other, C2, which gets populated later. Periodically, a
process goes through, calculates CURR = SELECT MAX(C2), and then walks
through all of the records populated with values in C1. For each non-null
C1, it assigns C2 based on the value of CURR, and then empties C1. That
means there is a period of time during which the "ultimate" sequence
value, C2, is not populated, which might or might not be a problem for
your application.
--
(format nil "~S@~S" "cbbrowne" "linuxfinances.info")
http://linuxfinances.info/info/linuxxian.html
Life's a duck, and then you sigh.
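A sketch of that two-field backfill (untested; "t", "c1", and "c2" as in
the description above; meant to be run by a single periodic job so two
backfills never race):

  CREATE OR REPLACE FUNCTION backfill_c2() RETURNS void AS $$
  DECLARE
      curr integer;
      r    record;
  BEGIN
      -- Start from the highest gapless number assigned so far.
      SELECT COALESCE(MAX(c2), 0) INTO curr FROM t;
      -- Number the not-yet-assigned rows in sequence (insertion) order,
      -- clearing c1 as each row gets its final value.
      FOR r IN SELECT c1 FROM t WHERE c1 IS NOT NULL ORDER BY c1 LOOP
          curr := curr + 1;
          UPDATE t SET c2 = curr, c1 = NULL WHERE c1 = r.c1;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;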
> Sometimes a business requirement is that a serial sequence never skips,
> e.g. when generating invoice/ticket/formal letter numbers. Would an
> INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> or must I install a trigger too to do additional checking?

If id is defined unique, it should be OK, but if two of those statements
happen to run concurrently you could get duplicate key violations, so be
prepared to retry the transaction. Savepoints in 8.0 will probably come in
handy.

Of course, concurrency is determined by the length of time between
inserting the new value and committing. So it would help if you had all
other values/statements ready/done by the time you insert into this table,
and can commit straight after this statement.

Andre
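An 8.0-style retry loop might look like this (untested sketch; assumes a
unique constraint on t.id, with "payload" and "insert_with_retry" as
invented names; each EXCEPTION block uses a savepoint under the hood):

  CREATE OR REPLACE FUNCTION insert_with_retry(p_payload text) RETURNS integer AS $$
  DECLARE
      new_id integer;
  BEGIN
      LOOP
          BEGIN
              SELECT COALESCE(MAX(id), 0) + 1 INTO new_id FROM t;
              INSERT INTO t (id, payload) VALUES (new_id, p_payload);
              RETURN new_id;
          EXCEPTION WHEN unique_violation THEN
              -- Another session committed the same MAX()+1 first. In read
              -- committed mode the next iteration gets a fresh snapshot,
              -- sees their row, and computes a new candidate.
              NULL;
          END;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;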