Thread: Relation 'pg_largeobject' does not exist

Relation 'pg_largeobject' does not exist

From: Brandon Keepers
I'm trying to upgrade a postgresql 7.0.3 database that uses large
objects to a more recent version, but I'm not able to export the blobs.
pg_dumplo was added in 7.1, so I tried compiling and running that
against the 7.0.3 database, but I get the following error:

./contrib/pg_dumplo/pg_dumplo: Failed to get LO OIDs:
ERROR:  Relation 'pg_largeobject' does not exist

Exported 0 large objects.


In my searches, I haven't found anything besides using pg_dumplo.
Does anyone have any other suggestions?

Thanks,
Brandon


Re: Relation 'pg_largeobject' does not exist

From: Tom Lane
Date: Mon, 2006-03-13 20:38 -0500
Brandon Keepers <bkeepers@gmail.com> writes:
> I'm trying to upgrade a postgresql 7.0.3 database that uses large
> objects to a more recent version, but I'm not able to export the blobs.

pg_dump should work.  If using a pg_dump version older than 8.1, you
need to use the -b switch and a non-default output format (I'd suggest -Fc).
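
For example (substituting your own database and file names):

    pg_dump -b -Fc -f mydb.dump mydb

and then load the result into the new server with pg_restore.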

            regards, tom lane

Re: Relation 'pg_largeobject' does not exist

From: Brandon Keepers
Tom,

On Mon, 2006-03-13 at 20:38 -0500, Tom Lane wrote:
> pg_dump should work.  If using a pg_dump version older than 8.1, you
> need to use the -b switch and a non-default output format (I'd suggest -Fc).
>
>             regards, tom lane

Thanks for your quick response!  I had actually just been trying that
(with 7.1) and came across another error:

NOTICE:  ShmemAlloc: out of memory
NOTICE:  LockAcquire: xid table corrupted
dumpBlobs(): Could not open large object.  Explanation from backend:
'ERROR:  LockRelation: LockAcquire failed
'.

Any ideas what would be causing that?  I'm going to try compiling a
newer version and running it again.

Thanks,
Brandon


Re: Relation 'pg_largeobject' does not exist

From: Tom Lane
Date: 2006-03-13
Brandon Keepers <bkeepers@gmail.com> writes:
> Thanks for your quick response!  I had actually just been trying that
> (with 7.1) and came across another error:

> NOTICE:  ShmemAlloc: out of memory
> NOTICE:  LockAcquire: xid table corrupted
> dumpBlobs(): Could not open large object.  Explanation from backend:
> 'ERROR:  LockRelation: LockAcquire failed

Ugh :-(  How many blobs have you got, thousands?  7.0 stores each blob
as a separate table, and I'll bet it is running out of lock table space
to hold a lock on each one.  My recollection is that we converted blob
storage to a single pg_largeobject table precisely because of that
problem.
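
(In 7.0 each large object has its own pg_class entry, so if memory
serves you can count them with something like

    SELECT count(*) FROM pg_class WHERE relkind = 'l';

though I'm going from memory on the relkind letter.)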

What you'll need to do to get around this is to export each blob in a
separate transaction (or at least no more than a thousand or so blobs
per transaction).  It looks like pg_dumplo might be easier to hack to do
things that way --- like pg_dump, it puts a BEGIN/COMMIT around the
whole run, but it's a smaller program and easier to move those commands
in.
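
Roughly speaking, the change inside the export loop would look
something like this (untested, and the variable names are made up):

    /* commit every 1000 blobs so the lock table can drain */
    if (++blob_count % 1000 == 0)
    {
        PQclear(PQexec(conn, "COMMIT"));
        PQclear(PQexec(conn, "BEGIN"));
    }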

Another possibility is to increase the lock table size, but that would
probably require recompiling the 7.0 backend.  If you're lucky,
increasing max_connections to the largest value the backend will support
will be enough.  If you've got many thousands of blobs there's no hope
there, but if it's just a few thousand this is worth a try before you go
hacking code.

            regards, tom lane

Re: Relation 'pg_largeobject' does not exist

From: "Brandon Keepers"
On 3/13/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Brandon Keepers <bkeepers@gmail.com> writes:
> > Thanks for your quick response!  I had actually just been trying that
> > (with 7.1) and came across another error:
>
> > NOTICE:  ShmemAlloc: out of memory
> > NOTICE:  LockAcquire: xid table corrupted
> > dumpBlobs(): Could not open large object.  Explanation from backend:
> > 'ERROR:  LockRelation: LockAcquire failed
>
> Ugh :-(  How many blobs have you got, thousands?  7.0 stores each blob
> as a separate table, and I'll bet it is running out of lock table space
> to hold a lock on each one.  My recollection is that we converted blob
> storage to a single pg_largeobject table precisely because of that
> problem.

Looks like there's over 17,000 blobs.  :(  But they're all very small,
if that makes a difference.

> What you'll need to do to get around this is to export each blob in a
> separate transaction (or at least no more than a thousand or so blobs
> per transaction).  It looks like pg_dumplo might be easier to hack to do
> things that way --- like pg_dump, it puts a BEGIN/COMMIT around the
> whole run, but it's a smaller program and easier to move those commands
> in.

Unfortunately, I don't know C.  Would someone be willing to help me
hack pg_dumplo in exchange for money?

> Another possibility is to increase the lock table size, but that would
> probably require recompiling the 7.0 backend.  If you're lucky,
> increasing max_connections to the largest value the backend will support
> will be enough.  If you've got many thousands of blobs there's no hope
> there, but if it's just a few thousand this is worth a try before you go
> hacking code.

I'm not the admin of the box that this database is on, so I don't have
any control over it.  I'm working on moving it to a box that I am the
admin of.  But it sounds like this wouldn't work anyway, since I have
so many blobs.


>                         regards, tom lane


Thanks, again for your help, Tom!

Brandon

Re: Relation 'pg_largeobject' does not exist

From: Tom Lane
Date: Tue, 2006-03-14 23:09 -0500
"Brandon Keepers" <bkeepers@gmail.com> writes:
> On 3/13/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Ugh :-(  How many blobs have you got, thousands?

> Looks like there's over 17,000 blobs.  :(  But they're all very small,
> if that makes a difference.

No, it's just the number of 'em that counts.

>> Another possibility is to increase the lock table size, but that would
>> probably require recompiling the 7.0 backend.  If you're lucky,
>> increasing max_connections to the largest value the backend will support
>> will be enough.

> I'm not the admin of the box that this database is on, so I don't have
> any control over it.  I'm working on moving it to a box that I am the
> admin of.  But it sounds like this wouldn't work anyway, since I have
> so many blobs.

7.0 sets the lock table size to 64 * max_connections, so if you can
crank max_connections up to 300 or so you should be able to dump.
I think this will work ... it's definitely worth a shot before you
start thinking about hacking the code.
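
(With max_connections at 300 that's 64 * 300 = 19200 lock-table
entries, which leaves some headroom over your 17,000 blobs.)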

            regards, tom lane

Re: Relation 'pg_largeobject' does not exist

From: Brandon Keepers
Date: 2006-03-16
On Tue, 2006-03-14 at 23:09 -0500, Tom Lane wrote:
> 7.0 sets the lock table size to 64 * max_connections, so if you can
> crank max_connections up to 300 or so you should be able to dump.
> I think this will work ... it's definitely worth a shot before you
> start thinking about hacking the code.
>
>             regards, tom lane

That didn't seem to make a difference.  The dump failed in the same
place (dump file 12MB) with the same message.

Brandon


Re: Relation 'pg_largeobject' does not exist

From: "Brandon Keepers"
If it makes a difference, when I un-tar the dump file from each failed
dump, it always has 2937 files in it.  I tried this using an old copy
of the data directory that had significantly fewer blobs in it and got
the same result.

On 3/16/06, Brandon Keepers <bkeepers@gmail.com> wrote:
> That didn't seem to make a difference.  The dump failed in the same
> place (dump file 12MB) with the same message.
>
> Brandon
>

Re: Relation 'pg_largeobject' does not exist

From: Tom Lane
Date: Fri, 2006-03-17 23:48 -0500
"Brandon Keepers" <bkeepers@gmail.com> writes:
> If it makes a difference, when I un-tar the dump file from each failed
> dump, it always has 2937 files in it.

That's pretty consistent with the idea that you're running out of
locktable space --- you'd expect that to fail at a certain number of
dumped blobs.

However, this makes me wonder whether you really did successfully
increase max_connections as I suggested; that should have had at least
some impact.  7.0 is far enough back that I don't remember exactly where
one sets the maximum number of backends in that version --- better
double-check the docs for 7.0.

            regards, tom lane

Re: Relation 'pg_largeobject' does not exist

From: Brandon Keepers
Tom,

On Fri, 2006-03-17 at 23:48 -0500, Tom Lane wrote:
> "Brandon Keepers" <bkeepers@gmail.com> writes:
> > If it makes a difference, when I un-tar the dump file from each failed
> > dump, it always has 2937 files in it.
>
> That's pretty consistent with the idea that you're running out of
> locktable space --- you'd expect that to fail at a certain number of
> dumped blobs.
>
> However, this makes me wonder whether you really did successfully
> increase max_connections as I suggested; that should have had at least
> some impact.  7.0 is far enough back that I don't remember exactly where
> one sets the maximum number of backends in that version --- better
> double-check the docs for 7.0.
>
>             regards, tom lane


The 7.0 docs revealed that I needed to start up postmaster with "-N
300".  I tried to get the admin of the box to do that, but the dump
kept failing at exactly the same spot.  I ended up having the admin of
the server send me a copy of the data directory, compiled 7.0.3 myself,
and the dump worked beautifully.
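
For the archives, the sequence that finally worked on my own box was
roughly (paths and names changed):

    postmaster -D /path/to/datadir -N 300 &
    pg_dump -b -Fc -f mydb.dump mydb

followed by a pg_restore into the new server.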

Thanks a lot for your help!

Brandon