Re: guaranteeing that a sequence never skips (fwd) - Mailing list pgsql-general

From: Christopher Browne
Subject: Re: guaranteeing that a sequence never skips (fwd)
Date:
Msg-id: m3hdpb4o5k.fsf@wolfe.cbbrowne.com
In response to: Re: guaranteeing that a sequence never skips (fwd)  (Mike Nolan <nolan@gw.tssi.com>)
Responses: Re: guaranteeing that a sequence never skips (fwd)
List: pgsql-general
In an attempt to throw the authorities off his trail, smarlowe@qwest.net ("Scott Marlowe") transmitted:
> On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
>> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
>> > > Am I correct to assume that SERIAL does not guarantee that a sequence
>> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>> > >
>> > > Sometimes a business requirement is that a serial sequence never skips,
>> > > e.g. when generating invoice/ticket/formal letter numbers. Would an
>> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
>> > > or must I install a trigger too to do additional checking?
>> >
>> > You will have to lock the whole table and your parallel performance will
>> > be poor.
>>
>> Locking the table isn't sufficient to guarantee that a sequence value
>> never skips.  What if a transaction fails and has to be rolled back?
>>
>> I've written database systems that used pre-numbered checks, what's usually
>> necessary is to postpone the check-numbering phase until the number of
>> checks is finalized, so that there's not much chance of anything else
>> causing a rollback.
>> --
>
> I didn't mean to use a sequence, sorry for being vague. I meant this:
>
> lock table
> select max(idfield)+1
> insert new row
> disconnect.

Yeah, that'll work, so long as you're prepared to wait for the table
to be available.
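A minimal sketch of that recipe, assuming a table named `invoices` with an integer `idfield` column (both names are illustrative, not from the thread).  `EXCLUSIVE` mode blocks other writers but still lets plain readers through:

```sql
BEGIN;
-- Serialize writers so no two sessions compute the same MAX()+1.
LOCK TABLE invoices IN EXCLUSIVE MODE;
-- COALESCE handles the very first row, when the table is empty.
INSERT INTO invoices (idfield, customer)
  SELECT COALESCE(MAX(idfield), 0) + 1, 'ACME Corp'
    FROM invoices;
COMMIT;
```

Note that this only avoids gaps from concurrency; as Mike points out above, a transaction that rolls back after the INSERT still wastes nothing here, because the number was never handed out until COMMIT, but any work done *after* taking the number and before COMMIT must succeed or be retried.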

I think I like my idea of putting in provisional values, and then
fixing them up later...

You could do this via a sequence thus:

 select setval('ourseq', 250000000);  -- Make sure the sequence starts
                                      -- way high

 create index idf_250m on thistable(idfield) where idfield > 250000000;
  -- Provide an efficient way to look up the entries that need
  -- to get reset

Then, every once in a while, a separate process would go in, see the
highest value on idfield < 250M, and rewrite the idfield on all of the
tuples where idfield > 250M.  It would be efficient due to the partial
index.  It limits the number of documents to 250M, but I'm sure that
can be alleviated when it turns into an issue...
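The fixup pass might look something like this sketch, reusing the 250M threshold from above (the table and column names are assumed).  It renumbers one provisional row per transaction, lowest first, and would be run in a loop until no rows above the threshold remain; the partial index keeps both subqueries on the provisional rows cheap:

```sql
BEGIN;
-- Keep writers out while we assign the next permanent number.
LOCK TABLE thistable IN EXCLUSIVE MODE;
UPDATE thistable
   SET idfield = (SELECT COALESCE(MAX(idfield), 0) + 1
                    FROM thistable
                   WHERE idfield < 250000000)   -- highest "real" id so far
 WHERE idfield = (SELECT MIN(idfield)
                    FROM thistable
                   WHERE idfield > 250000000);  -- oldest provisional id
COMMIT;
```

Doing one row per transaction keeps the lock short; a single big UPDATE inside one transaction would also work if a longer lock is acceptable.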
--
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/nonrdbms.html
Would I be  an optimist or a  pessimist if I said my  bladder was half
full?
