Re: Auto incrementing primary keys - Mailing list pgsql-general

From Gordon
Subject Re: Auto incrementing primary keys
Date
Msg-id 867702b7-d958-4707-bedb-4db900b96565@64g2000hsw.googlegroups.com
In response to Re: Auto incrementing primary keys  (pgsql_user <amaltasb@gmail.com>)
Responses Re: Auto incrementing primary keys
List pgsql-general
On Feb 18, 1:14 pm, pgsql_user <amalt...@gmail.com> wrote:
> On Feb 18, 6:08 pm, Paul Boddie <p...@boddie.org.uk> wrote:
>
> > On 18 Feb, 13:36, django_user <amalt...@gmail.com> wrote:
>
> > > How can I stop postgresql from incrementing the primary key value, so
> > > that even after many failed insert statements it gets the next id val.
>
> > "Auto-incrementing" columns, typically implemented using the serial
> > data type [1], employ sequences.
>
> > From the manual:
>
> > "To avoid blocking of concurrent transactions that obtain numbers from
> > the same sequence, a nextval operation is never rolled back; that is,
> > once a value has been fetched it is considered used, even if the
> > transaction that did the nextval later aborts. This means that aborted
> > transactions may leave unused "holes" in the sequence of assigned
> > values. setval operations are never rolled back, either."
>
> > http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html
>
> > In other words, to permit a decent level of concurrency, PostgreSQL
> > doesn't wait to see if a transaction succeeds with a value from a
> > sequence before updating the sequence. If you want to reset a sequence
> > so that it always uses the next unused value as determined by looking
> > at the table, I suppose you could do something like this:
>
> > select setval('mytable_id_seq', x) from (select max(id) as x from
> > mytable) as y;
>
> > But I doubt that you would want to do this too often in any system
> > with any reasonable level of concurrent access to the table or the
> > sequence concerned.
>
> > Paul
>
> > [1]http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE...
>
> so wouldn't I run out of ids one day if there are a lot of failed
> insert statements? let's say for every successful insert there are 50
> unsuccessful inserts, so the ids would be 1, 51, 102, and so on; once
> I have thousands of rows, I will run out of IDs? should I use
> bigserial instead?
>
> Thanks

In theory, yes.  But the standard 4-byte integer can represent about 2
billion positive numbers, so even with a lot of failed inserts you're
probably not going to run out for years.
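To put a number on "years": here is a rough back-of-envelope sketch. The
insert rate and the 50-failures-per-success ratio are assumptions taken
from the question above, not measurements; every insert attempt, failed
or not, consumes one sequence value.

```python
# Hypothetical workload numbers (the 10,000 rows/day figure is made up
# for illustration); each attempt burns one value from the sequence.

MAX_INT4 = 2**31 - 1        # ceiling of a plain serial column
MAX_INT8 = 2**63 - 1        # ceiling of a bigserial column

attempts_per_row = 51       # 1 success + 50 failures, as in the question
rows_per_day = 10_000       # assumed successful-insert rate

values_per_day = attempts_per_row * rows_per_day
days_until_exhausted = MAX_INT4 // values_per_day

print(days_until_exhausted)          # → 4210 days
print(days_until_exhausted // 365)   # → 11 (roughly a decade)
```

Even under that pessimistic failure ratio, a plain serial lasts over a
decade at ten thousand rows a day; bigserial pushes the ceiling out by a
factor of about 2**32, so switching is only worth it for genuinely
high-volume tables.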
