Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1 - Mailing list pgsql-jdbc

From Tom Lane
Subject Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
Date
Msg-id 9428.1049916013@sss.pgh.pa.us
In response to Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1  ("Chris White" <cjwhite@cisco.com>)
Responses Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
List pgsql-jdbc
"Chris White" <cjwhite@cisco.com> writes:
> Looking at our code further, the actual code writes the large object and commits
> it, opens the large object, updates the header of the large object (first 58
> bytes) with some length info using seeks, then writes and commits the object
> again, before updating and committing the associated tables. The data I saw
> in the exported file was the header info without the updates for the length
> info, i.e. after the first commit!!
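
(For reference, the sequence described above maps roughly onto the JDBC
driver's large object API as sketched below. This is only an illustration,
not the application's actual code: the connection URL, the buildHeader()
helper, and the 58-byte header layout are assumptions, and the class names
are those of the current driver rather than the 7.2-era one.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LoHeaderUpdate {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "user", "password");
            conn.setAutoCommit(false);       // large objects must be used inside a transaction
            LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();

            // 1. Create the large object, write a placeholder header plus the
            //    body, then commit the transaction.
            int oid = lom.create(LargeObjectManager.READWRITE);
            LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
            byte[] body = "...payload...".getBytes();
            lo.write(new byte[58]);          // placeholder header, length not yet known
            lo.write(body);
            lo.close();
            conn.commit();                   // first commit

            // 2. Reopen the object, seek back to the start, and overwrite the
            //    58-byte header with the now-known length info, then commit again.
            lo = lom.open(oid, LargeObjectManager.WRITE);
            lo.seek(0);
            lo.write(buildHeader(body.length));
            lo.close();
            conn.commit();                   // second commit

            conn.close();
        }

        // Hypothetical header builder; the real layout is application-specific.
        static byte[] buildHeader(int length) {
            byte[] h = new byte[58];
            h[0] = (byte) (length >>> 24);
            h[1] = (byte) (length >>> 16);
            h[2] = (byte) (length >>> 8);
            h[3] = (byte) length;
            return h;
        }
    }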

Oh, that's interesting.  I wonder whether you could be running into some
variant of this issue:
http://archives.postgresql.org/pgsql-hackers/2002-05/msg00875.php

I looked a little bit at fixing this, but wasn't sure how to get the
appropriate snapshot passed to the LO functions --- the global
QuerySnapshot might not be the right thing, but then what is?  Also,
what if a transaction opens multiple LO handles for the same object
--- should they be able to see each other's updates?  (I'm not sure
we could prevent it, so this may be moot.)

BTW what do you mean exactly by "commit" above?  There is no notion of
committing a large object separately from committing a transaction.
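
(In JDBC terms: the LargeObject API has no per-object commit; the only commit
available is Connection.commit() on the enclosing transaction. A minimal
sketch, assuming a connection "conn" with autocommit off, an existing "oid",
and a prepared "newHeader" byte array:)

    LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();
    LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
    lo.write(newHeader);   // changes are part of the open transaction
    lo.close();            // closes the LO descriptor; this is not a commit
    conn.commit();         // the only "commit" there is: the transaction's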

            regards, tom lane

