Re: Re: Re: is PG able to handle a >500 GB Database? - Mailing list pgsql-general

From Tom Lane
Subject Re: Re: Re: is PG able to handle a >500 GB Database?
Date
Msg-id 1672.980008957@sss.pgh.pa.us
In response to Re: Re: Re: is PG able to handle a >500 GB Database?  ("Brett W. McCoy" <bmccoy@chapelperilous.net>)
List pgsql-general

"Brett W. McCoy" <bmccoy@chapelperilous.net> writes:
>> last_value will return whatever value was last assigned
>> by any backend, therefore you might not get the value that was inserted
>> into your tuple, but someone else's.

> In that case you would call nextval *before* you insert and use that
> value in the INSERT statement.

Yup, that works too.  Which one you use is a matter of style, I think.
(Actually I prefer the nextval-first approach myself, just because it
seems simpler and more obviously correct.  But currval-after does work.)

To bring this discussion back to the original topic: sequences are also
4-byte counters, at present.  But there's still some value in using a
sequence to label rows in a huge table, rather than OIDs.  Namely, you
can use a separate sequence for each large table.  That way, you only
get into trouble when you exceed 4G rows entered into a particular
table, not 4G rows created in the entire database cluster.
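As a sketch of that per-table arrangement (again with invented names), you can
either create the sequence explicitly or let the SERIAL shorthand create one
for you:

    -- explicit sequence dedicated to one big table
    CREATE SEQUENCE bigtable_id_seq;
    CREATE TABLE bigtable (
        id    int4 DEFAULT nextval('bigtable_id_seq'),
        data  text
    );

    -- or the SERIAL shorthand, which creates bigtable2_id_seq behind the scenes
    CREATE TABLE bigtable2 (
        id    serial,
        data  text
    );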

            regards, tom lane
