Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1 - Mailing list pgsql-jdbc

From: Chris White
Subject: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
Date:
Msg-id: 012f01c2feca$1a3ab860$ff926b80@amer.cisco.com
In response to: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1 (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
List: pgsql-jdbc
Looking at our code further: the actual code writes the large object and
commits it, then reopens the large object and updates its header (the first
58 bytes) with some length info using seeks, then writes and commits the
object a second time, before updating and committing the associated tables.
The data I saw in the exported file was the header info without the updates
for the length info, i.e. the state after the first commit!!
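
In outline, the sequence is something like this (a sketch written against
the JDBC driver's LargeObject API rather than our actual code; the
connection details, buffer contents, and class name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoWriteThenPatchHeader {
    static final int HEADER_LEN = 58;  // length info lives in the first 58 bytes

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "pass");
        conn.setAutoCommit(false);     // LO operations must run in a transaction
        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();

        byte[] payload = new byte[300 * 1024];  // placeholder ~300K body

        // Step 1: create the object, write everything, commit.
        long oid = lom.createLO(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.write(payload);
        lo.close();
        conn.commit();   // first commit: header has no length info yet

        // Step 2: reopen, seek back into the header, patch the length
        // info, commit again.
        byte[] header = new byte[HEADER_LEN];  // placeholder: real code encodes lengths here
        lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.seek(0);
        lo.write(header);
        lo.close();
        conn.commit();   // second commit: the state lo_export should see

        // Step 3: update and commit the associated tables (omitted).
        conn.close();
    }
}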

Chris

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, April 09, 2003 10:28 AM
To: cjwhite@cisco.com
Cc: pgsql-jdbc@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [JDBC] [ADMIN] Problems with Large Objects using Postgres 7.2.1


"Chris White" <cjwhite@cisco.com> writes:
> I didn't look at the data in the table. However, when I did a lo_export
> of one of the objects I only got a 2K file output.

IIRC, we store 2K per row in pg_largeobject.  So this is consistent with
the idea that row 0 is present for the LO ID, while row 1 is not.  What
I'm wondering is if the other hundred-odd rows that would be needed to
hold a 300K large object are there or not.  Also, do the rows contain
the appropriate data for their parts of the overall large object?
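
For instance, listing the stored pages along these lines would show which
2K pages exist and how full each one is (a JDBC sketch; the connection
details and the OID 12345 are placeholders for the real ones, and reading
pg_largeobject may require superuser rights on newer servers):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CheckLoPages {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/mydb", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT pageno, length(data) FROM pg_largeobject "
                     + "WHERE loid = CAST(? AS oid) ORDER BY pageno")) {
            ps.setLong(1, 12345L);  // the large object's OID
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // a 300K object should show roughly 150 pages
                    // of up to 2K each
                    System.out.printf("page %d: %d bytes%n",
                            rs.getInt(1), rs.getInt(2));
                }
            }
        }
    }
}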

            regards, tom lane

