Tom,
I didn't look at the data in the table. However, when I did an lo_export of
one of the objects, I only got a 2K file as output.
Next time this happens I will look at the table data.
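For reference, this is roughly what I did (the OID and file path below are
just placeholders, not the real values):

    -- export the large object to a file on the server, then check its size
    SELECT lo_export(16403, '/tmp/lo_16403.bin');
    -- ls -l /tmp/lo_16403.bin  shows ~2K instead of the ~320K our table records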
Chris
-----Original Message-----
From: pgsql-admin-owner@postgresql.org
[mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Tom Lane
Sent: Wednesday, April 09, 2003 9:51 AM
To: cjwhite@cisco.com
Cc: pgsql-jdbc@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
"Chris White" <cjwhite@cisco.com> writes:
> What I am seeing is that when all 8 threads are running and the system is
> shut down, large objects committed in transactions near the shutdown are
> corrupt when the database is restarted. I know the large objects are
> committed, because the associated entries in the tables which point to the
> large objects are present after the restart, with valid information about
> the large object length and OID. However, when I access the large objects I
> am only returned a 2K chunk even though the table entry says the object
> should be 320K.
Hmm. Have you tried looking directly into pg_largeobject to see what
row(s) are present for the particular LO ID? Is the data that's there
valid?
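Something along these lines would show it (16403 is just a stand-in for the
actual LO OID):

    -- list the 2K pages stored for one large object
    SELECT loid, pageno, length(data) AS bytes
    FROM pg_largeobject
    WHERE loid = 16403   -- stand-in OID; use the one from your metadata table
    ORDER BY pageno;

With the default block size, a 320K object should come back as roughly 160
pages of up to 2K each; if only pageno 0 is there, that would suggest the
later pages never made it to disk.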
> Anybody have any ideas what the problem is? Are there any known issues with
> the recovery of large objects?
No, news to me. I would suggest that you should be running 7.2.4, not
7.2.1; we don't make dot-releases just to keep busy. But offhand I
don't know of any recent reports of symptoms like this.
regards, tom lane