We have a problem with our PostgreSQL application. We are trying to decide whether this is a major problem with Postgres, or whether it is due to our application design.
Our application generates thousands of temp tables, and data is moved and sorted several times in each batched insertion.
We chew through OIDs very quickly. At our customer's site, the OID count exceeds 4 billion and wraps roughly every two weeks.
We have found this to be a major problem. Once the OIDs wrap, we constantly get errors of the form "Cannot insert a duplicate key into unique index pg_class_oid_index". There are about 3,000 entries in pg_class at this stage. As most of the tables are dynamically generated, a failure means lost data.
I am in the middle of designing an application change as a stop-gap fix:
Change all temp table creation to use "WITHOUT OIDS", hoping to reduce the consumption of OIDs (see the sketch after this list).
Check for the error string "Cannot insert a duplicate key into unique index pg_class_oid_index" in the batch shells for each function call, and re-run the call if required.
Remove all truncate statements from processing
Trigger dump and restore of the database during nightly processing if the error occurs.
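For the first item, the change I have in mind looks roughly like this (the table and column names here are only illustrative, not our real schema):

    -- Illustrative temp table; the WITHOUT OIDS clause stops each row
    -- from consuming an OID, so only the table itself (pg_class entry,
    -- index, etc.) uses OIDs.
    CREATE TEMP TABLE batch_work (
        batch_id   integer,
        payload    text
    ) WITHOUT OIDS;

My understanding is that WITHOUT OIDS is available as of 7.2, which is the version we are on.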
Can anyone comment on whether this is a fundamental limitation of PostgreSQL, and whether other databases have addressed this problem? Or alternatively, is it a problem with our application that must be corrected?
If anyone can instruct me on a better way to handle this, it would be appreciated.
We are running PostgreSQL 7.2.1 on Pentium 4 x86 type systems.
Thanks
Andrew Bartley
Evolvo Systems Pty Ltd.
Level 3, 351-353 Elizabeth Street
Melbourne, VIC, 3000
andrewb@evolvosystems.com
tel. +613 9642 3200
fax. +613 9642 4108
mob. +613 0414 520 920