Thread: Max OID
I have a table that is used to control documents:
 Column               | Type                    | Modifiers
----------------------+-------------------------+-----------------------------------------------------------
 recno                | integer                 | not null default nextval('document_recno_seq'::regclass)
 foreigntablename     | character varying(255)  | not null
 foreignrecno         | integer                 | not null
 docforeigntablename  | character varying(255)  | not null
 docforeignrecno      | integer                 | not null
 docforeigntext       | character varying(255)  | not null
 documenttyperecno    | integer                 | not null
 version              | character varying(20)   | not null
 loadeddate           | date                    | not null
 loadedtime           | time without time zone  | not null
 usersrecno           | integer                 | not null
 title                | character varying(40)   | not null
 description          | character varying(255)  |
 doculookupcodesrecno | integer                 | not null
 document             | oid                     | not null
 suffix               | character varying(255)  |
Each document is loaded using the large object commands, and the OID returned by the load is then stored as a reference in the document table.
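For illustration, the load step is roughly along these lines (the file path and all the column values here are just made-up placeholders, not real data):

    -- lo_import() creates the large object from a file on the server
    -- and returns its OID, which is stored in document.document
    INSERT INTO document
        (foreigntablename, foreignrecno,
         docforeigntablename, docforeignrecno, docforeigntext,
         documenttyperecno, version, loadeddate, loadedtime,
         usersrecno, title, doculookupcodesrecno, document, suffix)
    VALUES
        ('order', 1234,
         'order', 1234, 'Order 1234',
         1, '1.0', current_date, localtime,
         42, 'Signed order form', 7,
         lo_import('/tmp/uploads/order1234.pdf'), 'pdf');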
This has been working fine, but I had a major crash on the server and it had to be rebuilt.
The database had been exported (pg_dump) - so I reimported (pg_restore) the database.
If I now look at the OIDs of the blobs (ie document.document), the numbers are in the range:
3159553408
3159553409
3159553410
3159553411
3159553412
3159553413
3159553414
3159553415
These numbers are above what the field I use for the OID in my code can hold (ie a signed integer, catering for about 2 billion), so the program no longer finds the document to export.
Any ideas as to why these OIDs became so large after reimporting (I am assuming they must have been under 2 billion before, or else the process could not have worked)? And what limit is there on this OID?
I'm at a loss here.
I've checked the lo_import / lo_export and I think that I am doing all this correctly.
The original import uses the server-side lo_import in an SQL statement. The large object appears to have been created OK and the OID value inserted into the document field shown above. All good so far. The users could then get the large object out - this used a call to psql and a \lo_export to retrieve the object whose OID is held in the document field. This has been working fine for 2-3 years.
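For illustration, the export side looks something like this (database name, record number, OID and output path are placeholders):

    # look up the large object OID held on the document row
    psql -d mydb -t -c "SELECT document FROM document WHERE recno = 1234"

    # then export that OID to a file on the client machine
    psql -d mydb -c "\lo_export 3159553408 /tmp/export/1234.pdf"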
On Saturday the live server crashed and was replaced with the backup server. The live data was pg_dump'd (from an 8.0.3 database) prior to the crash and pg_restored (into 8.0.8) onto the backup server. Again all seemed well - no errors were reported during the restore and the data looked good. BUT the document field now contains values of > 2 billion. Is it possible that the dump or the restore 'altered' not only the document field but also the actual OID of the large object? The data all looks consistent, in as much as, say, the document field points to large object 3123456789 and large object OID 3123456789 is correct (ie it does tie up to that record). The problem is that the program that handles the exporting of the object has never handled OIDs bigger than 2 billion - so the document field and its related large object would have to have been <= 2 billion prior to the dump/restore. Does that make sense, and is it feasible?
I have also restored the data into an 8.1.10 database and I see the same results, ie the document field contains only values above 3 billion - but they do seem to correlate correctly with the large objects as retrieved by \lo_export.
All,
Does anyone have an answer to this?
I had thought that creating a new instance (using initdb) and then pg_restoring the dump into it would reset the OIDs, but this seems arbitrary.
If I have a set of large objects and a pointer to each object on a table, what happens to those pointers when the database is dumped and restored? Is there a way I can force the restored OIDs to be < 2 billion?
I had expected the OIDs to remain static - ie I load an object and it is given OID 1; I then put the value of that OID pointer in a table field, so that field also shows 1; I then dump that database and restore it elsewhere. I expected the OID and the field pointer to have the same value in the restored database, but that doesn't appear to be the case.
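For reference, this is the sort of sanity check one can run after the restore (just a sketch - the table and column names are from my schema above):

    -- how big are the restored pointers?
    SELECT min(document), max(document) FROM document;

    -- do any document rows point at an OID with no matching large object?
    SELECT count(*)
    FROM document d
    WHERE NOT EXISTS
          (SELECT 1 FROM pg_largeobject lo WHERE lo.loid = d.document);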
Maybe you need to specify "-o" when pg_dump'ing. From the docs for pg_dump:
-o
--oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used.
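ie something along the lines of (database and file names here are only examples):

    pg_dump -o -Fc -f mydb.dump mydb
    pg_restore -d mydb mydb.dump

Although I'm not sure offhand whether -o affects the OIDs assigned to the large objects themselves, as opposed to the OID column of ordinary table rows.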
Steve T <steve@retsol.co.uk> writes:
> If I have a set of large objects and I have a pointer on a table to that
> object, what happens to that pointer when the database is dumped and
> restored?

Nothing, if you're using a reasonably modern PG version (8.1 or later). pg_dump preserves the OIDs of large objects. In older versions dump/reload didn't preserve those OIDs, but instead pg_dump attempted to update stored OIDs to match the new values.

regards, tom lane
Tom,
OK - so what I'm seeing when dumping a database from 8.0.8 and restoring it under 8.1.10 isn't what I'd see if I dumped and restored it all under the same release of 8.0.8?
If I dump and restore under 8.0.8, you would expect the OIDs to be reset?
If I then do the same (ie dump and restore) under 8.1.10, the OIDs would be preserved wherever OIDs have been used? Does that also apply to the pg_ system tables?
The reason for these questions is that I had a server crash and restored an 8.0.8 dump from the crashed server into an 8.0.8 instance on a backup server. I didn't initdb the instance on the backup server before doing the restore, and after the restore I had a problem with the OIDs being much higher for the same records on the backup than they were on the original. The programming language I use then had a problem retrieving OID values > 2 billion (it always returned exactly 2 billion), so the application had a problem. I got around that particular issue.
But I now want to revert the OIDs to 'smaller' values - so I had hoped that dumping the database, running initdb, and restoring the DB (all under 8.0.8) would reset the OIDs back down. Would that be the case?
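Failing that, I suppose a fallback would be to re-write each object under a new OID and re-point the row at it - a very rough sketch only (it assumes superuser access for the server-side functions, and the record number and path are placeholders):

    -- export the existing blob to a file on the server
    SELECT lo_export(document, '/tmp/relo/1234')
    FROM document WHERE recno = 1234;

    -- re-import the file (which assigns a fresh OID) and repoint the row;
    -- the old large object would then need an lo_unlink()
    UPDATE document
       SET document = lo_import('/tmp/relo/1234')
     WHERE recno = 1234;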
Harold,
Thanks for the pointer!
Steve Tucknott
ReTSol Ltd
DDI: 01323 488548