Thread: Re: guaranteeing that a sequence never skips (fwd)
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> >
> > Sometimes a business requirement is that a serial sequence never skips,
> > e.g. when generating invoice/ticket/formal letter numbers. Would an
> > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> > or must I install a trigger too to do additional checking?
>
> You will have to lock the whole table and your parallel performance will
> be poor.

Locking the table isn't sufficient to guarantee that a sequence value
never skips. What if a transaction fails and has to be rolled back?

I've written database systems that used pre-numbered checks; what's usually
necessary is to postpone the check-numbering phase until the number of
checks is finalized, so that there's not much chance of anything else
causing a rollback.
--
Mike Nolan
On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > > Am I correct to assume that SERIAL does not guarantee that a sequence
> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> > >
> > > Sometimes a business requirement is that a serial sequence never skips,
> > > e.g. when generating invoice/ticket/formal letter numbers. Would an
> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> > > or must I install a trigger too to do additional checking?
> >
> > You will have to lock the whole table and your parallel performance will
> > be poor.
>
> Locking the table isn't sufficient to guarantee that a sequence value
> never skips. What if a transaction fails and has to be rolled back?
>
> I've written database systems that used pre-numbered checks, what's usually
> necessary is to postpone the check-numbering phase until the number of
> checks is finalized, so that there's not much chance of anything else
> causing a rollback.

I didn't mean to use a sequence, sorry for being vague. I meant this:

lock table
select max(idfield)+1
insert new row
disconnect
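As a minimal sketch, that recipe could be a single PostgreSQL transaction. The table name `invoices`, its columns, and the choice of lock mode here are illustrative assumptions, not from the thread:

```sql
-- Hypothetical gapless-numbering insert; names are illustrative only.
BEGIN;

-- Serialize all writers on this table so concurrent MAX() reads can't race.
-- EXCLUSIVE mode blocks other INSERT/UPDATE/DELETE but still allows plain SELECTs.
LOCK TABLE invoices IN EXCLUSIVE MODE;

INSERT INTO invoices (idfield, customer)
SELECT COALESCE(MAX(idfield), 0) + 1, 'ACME Corp'
FROM invoices;

COMMIT;
-- If anything above fails and the transaction rolls back, no number was
-- "consumed" -- unlike a sequence, MAX()+1 leaves no gap on rollback.
```

Note that this still serializes all inserters on the lock, which is exactly the parallel-performance cost Scott warns about above.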
In an attempt to throw the authorities off his trail, smarlowe@qwest.net ("Scott Marlowe") transmitted:
> On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
>> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
>> > > Am I correct to assume that SERIAL does not guarantee that a sequence
>> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>> > >
>> > > Sometimes a business requirement is that a serial sequence never skips,
>> > > e.g. when generating invoice/ticket/formal letter numbers. Would an
>> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
>> > > or must I install a trigger too to do additional checking?
>> >
>> > You will have to lock the whole table and your parallel performance will
>> > be poor.
>>
>> Locking the table isn't sufficient to guarantee that a sequence value
>> never skips. What if a transaction fails and has to be rolled back?
>>
>> I've written database systems that used pre-numbered checks, what's usually
>> necessary is to postpone the check-numbering phase until the number of
>> checks is finalized, so that there's not much chance of anything else
>> causing a rollback.
>
> I didn't mean to use a sequence, sorry for being vague. I meant this:
>
> lock table
> select max(idfield)+1
> insert new row
> disconnect

Yeah, that'll work, so long as you're prepared to wait for the table to
be available.

I think I like my idea of putting in provisional values, and then
fixing them up later... You could do this via a sequence thus:

select setval('ourseq', 250000000);
  -- Make sure the sequence starts way high

create index idf_250m on thistable(idfield) where idfield > 250000000;
  -- Provide an efficient way to look up the entries that need to get reset

Then, every once in a while, a separate process would go in, see the
highest value on idfield < 250M, and rewrite the idfield on all of the
tuples where idfield > 250M. It would be efficient due to the partial
index.
It limits the number of documents to 250M, but I'm sure that can be
alleviated when it turns into an issue...
--
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/nonrdbms.html
Would I be an optimist or a pessimist if I said my bladder was half full?
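The periodic renumbering pass described above might look like the following sketch. Only `thistable`, `idfield`, and the 250M threshold come from the thread; the window-function renumbering and the `ctid` self-join are my assumptions, and require a reasonably modern PostgreSQL:

```sql
-- Hypothetical cleanup pass: rows with idfield > 250000000 are provisional
-- and get rewritten to the next contiguous "real" numbers.
BEGIN;

-- Keep new provisional rows out while we renumber.
LOCK TABLE thistable IN EXCLUSIVE MODE;

UPDATE thistable AS t
SET idfield = r.newid
FROM (
    SELECT ctid,
           (SELECT COALESCE(MAX(idfield), 0)
            FROM thistable
            WHERE idfield <= 250000000)
             + row_number() OVER (ORDER BY idfield) AS newid
    FROM thistable
    WHERE idfield > 250000000   -- uses the partial index idf_250m
) AS r
WHERE t.ctid = r.ctid;

COMMIT;
```

The partial index makes finding the provisional rows cheap even when the table is large, since only rows above the threshold appear in it.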
> Then, every once in a while, a separate process would go in, see the
> highest value on idfield < 250M, and rewrite the idfield on all of the
> tuples where idfield > 250M. It would be efficient due to the partial
> index. It limits the number of documents to 250M, but I'm sure that
> can be alleviated when it turns into an issue...

I think you'd be better off using two columns. Call the first one the
'work ticket' for the check request. You don't really care whether it has
gaps; its primary purpose is to ensure that each check request has a
unique document number of some kind, so a sequence works fine.

One and only one program assigns the actual check numbers, in a separate
column. It seems clumsy and adds an extra step, but that's how most
commercial packages do it, and that's why they do it that way, too.
--
Mike Nolan
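The two-column scheme above could be sketched like this in PostgreSQL. All table and column names (`checks`, `work_ticket`, `check_number`, `payee`) are illustrative assumptions, as is the single-threaded numbering pass:

```sql
-- Hypothetical schema: the ticket may have gaps; the check number may not.
CREATE TABLE checks (
    work_ticket  serial PRIMARY KEY,  -- sequence-backed; gaps here are harmless
    check_number integer UNIQUE,      -- assigned later, gap-free
    payee        text NOT NULL
);

-- Check requests just take the next ticket; no table lock, cheap concurrency.
INSERT INTO checks (payee) VALUES ('ACME Corp');

-- The one-and-only numbering program assigns real check numbers in
-- ticket order; running it single-threaded avoids gaps from rollbacks.
UPDATE checks AS c
SET check_number = n.next_no
FROM (
    SELECT work_ticket,
           (SELECT COALESCE(MAX(check_number), 0) FROM checks)
             + row_number() OVER (ORDER BY work_ticket) AS next_no
    FROM checks
    WHERE check_number IS NULL
) AS n
WHERE c.work_ticket = n.work_ticket;
```

The design choice mirrors Mike's point: the expensive gapless guarantee is confined to one serialized step, while everyday inserts stay fast and concurrent.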