Re: is PG able to handle a >500 GB Database? - Mailing list pgsql-general

From Tom Lane
Subject Re: is PG able to handle a >500 GB Database?
Msg-id 6623.979874838@sss.pgh.pa.us
In response to Re: is PG able to handle a >500 GB Database?  (Alvar Freude <alvar.freude@merz-akademie.de>)
Responses Re: is PG able to handle a >500 GB Database?
Re: is PG able to handle a >500 GB Database?
List pgsql-general
Alvar Freude <alvar.freude@merz-akademie.de> writes:
> Tom Lane wrote:
>> You'd probably find the OID counter wrapping around before too long.
>> However, as long as you don't assume that OIDs are unique in your data
>> tables, that shouldn't bother you a whole lot.  AFAIK you should be
>> able to make it work.

> ok -- the OID counter is (still) a 4-byte int?

Right.

> As far as I can tell, I don't think I need unique OIDs -- as long as
> Postgres doesn't need them!

Unless your application logic tries to use OIDs as row identifiers,
duplicate OIDs in user tables are not a problem.
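For instance, a minimal sketch -- the table and column names here are made
up purely for illustration -- is to give rows an explicit key and reference
them by that, never by OID:

    -- hypothetical table: an explicit SERIAL key, so nothing relies on OIDs
    CREATE TABLE measurements (
        id      SERIAL PRIMARY KEY,   -- the row identifier the application uses
        sensor  INTEGER NOT NULL,
        value   FLOAT8 NOT NULL
    );

    -- look rows up by the real key, not by oid
    SELECT value FROM measurements WHERE id = 42;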

The system does assume that OIDs are unique within certain system tables
--- for example, two tables (pg_class rows) can't have the same OID.
This is enforced by unique indexes on those tables, however.  If you
were really unlucky, then after OID wraparound you might see "can't
insert duplicate key" failures while doing create table or some such.
This could be dealt with just by retrying till it works, since each
try will generate new OIDs.  But the odds of a conflict are pretty tiny,
so I mention this mainly for completeness.
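(Not that you need to do anything about it, but if you're curious the
enforcement is visible in the catalogs; the exact index names vary across
versions:)

    -- show the indexes defined on pg_class itself; the unique one on its
    -- oid column is what prevents duplicate table OIDs
    SELECT i.relname AS index_name, x.indisunique
    FROM pg_class c, pg_index x, pg_class i
    WHERE c.relname = 'pg_class'
      AND x.indrelid = c.oid
      AND i.oid = x.indexrelid;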

We do have a TODO item to allow OIDs to be 8 bytes, but in the real world
I doubt it's a big deal.

I am more concerned about the 4-byte transaction ID generator ---
wraparound of that counter would be much nastier.  Don't insert those
billion rows in a billion separate transactions ;-)
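For example (reusing the hypothetical table from above), batch the load so
that each transaction covers many rows: one BEGIN/COMMIT block consumes a
single transaction ID no matter how many rows it inserts, and a single COPY
is cheaper still.

    -- one transaction ID per batch, not per row
    BEGIN;
    INSERT INTO measurements (sensor, value) VALUES (1, 0.5);
    INSERT INTO measurements (sensor, value) VALUES (2, 0.7);
    -- ... thousands more rows per batch ...
    COMMIT;

    -- or bulk-load a whole file in one command (and one transaction)
    COPY measurements FROM '/tmp/batch.dat';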

            regards, tom lane
