OID Overflow for large objects

From: Jeff Boes
Date:
We had a puzzling situation occur last weekend. I have since figured out
how to work around it for now, but of course those who sign my checks
want to know how we can make sure that something like this can never
happen again ...

The OID value for large objects crossed the 2**31 boundary, and some PHP code
stopped working (it would pull the OID value from one of our tables, then do a
lo_export call to retrieve the BLOB; however, once the value passed 2**31, it
failed because the internal library was treating the value as a signed rather
than unsigned integer).

I didn't write the PHP code, and the library is (I'm told) the PHP
equivalent of Perl's DBI layer (which doesn't seem to have any such
problem, so there, PHP freaks! 8-), so I couldn't really change that.
What I did was cast the OID to a float, which fooled the library into
treating it properly. (Thank you, Google Groups!)
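
Roughly what that looks like in SQL (the table and column names here are
made up for illustration, and the intermediate bigint cast is there
because there may be no direct cast from oid to float):

    -- Hypothetical table "blobs" with an oid column "blob_oid".
    -- Fetched as a raw oid, a value at or above 2**31 looks negative to
    -- a client that stores it in a signed 32-bit integer.
    SELECT blob_oid FROM blobs WHERE id = 12345;

    -- Casting to a float in the query keeps the value positive on the
    -- client side, so it can still be handed to lo_export as-is.
    SELECT blob_oid::int8::float8 AS blob_oid FROM blobs WHERE id = 12345;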

Meanwhile, we're busy counting on our fingers and toes to figure out when the
2**32 boundary will be crossed. Our best guess is that it took us around a year
to make it to the first threshold. We plan to upgrade that DB soon, which means
dump and reload, which means resetting the OID counter. So that might give us a
couple of years.
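
For what it's worth, the arithmetic behind that guess is just the two
boundaries involved:

    -- 2**31 is where signed 32-bit client code starts misbehaving;
    -- 2**32 is where the OID counter itself wraps around.
    SELECT 2147483648::int8 AS signed_32bit_boundary,             -- 2**31
           4294967296::int8 AS oid_wraparound,                    -- 2**32
           4294967296::int8 - 2147483648::int8 AS oids_left_before_wrap;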

I found some indication of the problem referenced in the Pg FAQ
(http://www3.sk.postgresql.org/docs/faqs/FAQ.html):

OIDs are stored as 4-byte integers, and will overflow at 4 billion. No one has
reported this ever happening, and we plan to have the limit removed before
anyone does.


Re: OID Overflow for large objects

From: Tom Lane
Date:
Jeff Boes <jboes@nexcerpt.com> writes:
> I found some indication of the problem referenced in the Pg FAQ
> (http://www3.sk.postgresql.org/docs/faqs/FAQ.html):

> OIDs are stored as 4-byte integers, and will overflow at 4 billion. No
> one has reported this ever happening, and we plan to have the limit
> removed before anyone does.

That comment in the FAQ seems quite out-of-date.

What will actually happen is that the OID generator will wrap around.
This will not bother Postgres particularly, but you may start having
occasional transaction failures due to duplicate OIDs --- for example,
I believe lo_create will fail if the OID it selects already exists in
pg_largeobject.
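
To check whether a particular OID is already in use, you can look it up
in pg_largeobject directly (the literal value below is only an example):

    -- A large object stores one row per page in pg_largeobject, so
    -- DISTINCT collapses a multi-page object to a single loid.
    SELECT DISTINCT loid
    FROM pg_largeobject
    WHERE loid = '2147483648'::oid;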

            regards, tom lane

Re: OID Overflow for large objects

From: Jeff Boes
Date:
Tom Lane wrote:

>> (http://www3.sk.postgresql.org/docs/faqs/FAQ.html):
>>
>> OIDs are stored as 4-byte integers, and will overflow at 4 billion. No
>> one has reported this ever happening, and we plan to have the limit
>> removed before anyone does.
>
> That comment in the FAQ seems quite out-of-date.
>
> What will actually happen is that the OID generator will wrap around.
> This will not bother Postgres particularly, but you may start having
> occasional transaction failures due to duplicate OIDs --- for example,
> I believe lo_create will fail if the OID it selects already exists in
> pg_largeobject.

Pardon my incredulity, but doesn't that seem like a bug? Or at least a
limitation? Does this mean that the effective useful lifetime of
pg_largeobject is only as long as it takes to wrap around, after which
you *must* dump and reload to prevent problems like this?

(I realize this is pretty much the same issue as having a sequence
number on a table, but if I'm interpreting all this correctly, the OID
wrap-around is going to occur a lot sooner than my table sequence number
wrap-around.)


--
Jeff Boes                                      vox 269.226.9550 ext 24
Database Engineer                                     fax 269.349.9076
Nexcerpt, Inc.                                 http://www.nexcerpt.com
           ...Nexcerpt... Extend your Expertise