Re: 4 billion record limit? - Mailing list pgsql-general

From: Tom Lane
Subject: Re: 4 billion record limit?
Date:
Msg-id: 4007.964503877@sss.pgh.pa.us
In response to: Re: 4 billion record limit?  (Chris Bitmead <chrisb@nimrod.itg.telstra.com.au>)
List: pgsql-general

Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> Paul Caskey wrote:
>> 1. This implies a hard limit of 4 billion records on a server, right?

> Basically, yes.

It's only a hard limit if your application assumes OIDs are unique.
If you don't assume that, then I don't think it's a big problem.
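
For instance, a schema that carries its own key never depends on OID
uniqueness.  A minimal sketch (the table and column names are made up
for illustration):

    -- application-level key; OIDs are never examined
    CREATE TABLE big_log (
        id      SERIAL PRIMARY KEY,  -- sequence-backed surrogate key
        payload text
    );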

It's possible (though obviously not especially likely) that you might
get OID collisions in the system tables after an OID-counter wraparound.
I believe that in 7.0, any such collision will result in a "duplicate
key" error, because we have unique indexes on OID for all the system
tables where it matters.  So the worst case is that you'd have to retry
a table creation or whatever the command was.  This could be annoying
but it's
certainly not catastrophic.
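
To make that concrete, the post-wraparound failure mode would look
something like this (the error text is illustrative, not an exact
quote from any release):

    CREATE TABLE foo (f1 int4);
    ERROR:  Cannot insert a duplicate key into unique index pg_class_oid_index

    -- retrying draws a fresh OID and normally succeeds
    CREATE TABLE foo (f1 int4);
    CREATE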

> The real solution is to support 64 bit oids, but that has not been done
> yet.

64-bit OIDs are probably the Final Solution here, but we're not in a
big hurry to do that because of the space cost and portability issues.
Another idea that's been discussed is to make the OID column optional in
user tables --- then simply not using OIDs in your large tables would
be enough to make the 32-bit OID supply last indefinitely.
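
If that idea goes in, declaring a big table without the column might
look something like this (hypothetical syntax --- nothing of the sort
is implemented as of 7.0):

    CREATE TABLE big_log (
        id      SERIAL PRIMARY KEY,
        payload text
    ) WITHOUT OIDS;  -- hypothetical clause: no OID consumed per row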

As long as we're talking about outer limits, it might be worth pointing
out that block numbers within tables are 32 bits.  With the default
blocksize of 8K, that sets a hard limit of either 16TB or 32TB per
table, depending on whether block-number arithmetic is unsigned or not
(I'm not sure).  You could push that to 64TB/128TB by using BLCKSZ=32K.
If that won't do, maybe you need a different DBMS ;-)
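
For the record, the arithmetic behind those numbers (runnable in a
reasonably modern psql; the numeric casts keep the products from
overflowing int4):

    SELECT 2147483648::numeric * 8192  AS signed_8k,    -- 2^31 blocks * 8K  = 16TB
           4294967296::numeric * 8192  AS unsigned_8k,  -- 2^32 blocks * 8K  = 32TB
           4294967296::numeric * 32768 AS unsigned_32k; -- 2^32 blocks * 32K = 128TB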

            regards, tom lane
