Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1 - Mailing list pgsql-jdbc

From: Chris White
Subject: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
Date:
Msg-id: 010c01c2feb4$a6ebe100$ff926b80@amer.cisco.com
In response to: Problems with Large Objects using Postgres 7.2.1  ("Chris White" <cjwhite@cisco.com>)
Responses: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-jdbc
Never saw any responses to this. Anybody have any ideas?
-----Original Message-----
From: pgsql-admin-owner@postgresql.org [mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Chris White
Sent: Thursday, April 03, 2003 8:36 AM
To: pgsql-jdbc@postgresql.org; pgsql-admin@postgresql.org
Subject: [ADMIN] Problems with Large Objects using Postgres 7.2.1

I have a Java application which writes large objects to the database using the JDBC interface. The application reads binary data from an input stream, writes it to the database as large objects in 8K chunks, and keeps a running count of the data length. At the end of the input stream it closes the large object, commits it, and then updates the associated tables with the large object id and length and commits that information to the database. The application has multiple threads (a maximum of 8) writing these large objects simultaneously, each using its own connection. Whenever the system has a problem, a monitor application detects the need for a system shutdown and shuts down Postgres using a smart shutdown.
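
Roughly, the write path looks like the sketch below. It uses the JDBC driver's LargeObject API; the cast target (org.postgresql.Connection in the 7.2-era driver, org.postgresql.PGConnection in later ones) and the table/column names are illustrative rather than the actual ones from the application.

import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoWriter {
    // Writes the stream to a new large object in 8K chunks, then records
    // its oid and computed length in an associated table (table/column
    // names here are illustrative only).
    public static void store(Connection conn, InputStream in, int rowId) throws Exception {
        conn.setAutoCommit(false);                 // LO calls must run inside a transaction

        // 7.2-era drivers cast to org.postgresql.Connection instead of PGConnection.
        LargeObjectManager lom =
            ((org.postgresql.PGConnection) conn).getLargeObjectAPI();
        int oid = lom.create(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);

        byte[] buf = new byte[8192];               // 8K chunks, as described above
        long length = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            lo.write(buf, 0, n);
            length += n;
        }
        lo.close();
        conn.commit();                             // first commit: the large object itself

        // Second transaction: record the oid and length in the associated table.
        PreparedStatement ps = conn.prepareStatement(
            "UPDATE messages SET lo_oid = ?, lo_len = ? WHERE id = ?");
        ps.setInt(1, oid);
        ps.setLong(2, length);
        ps.setInt(3, rowId);
        ps.executeUpdate();
        ps.close();
        conn.commit();
    }
}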
 
What I am seeing is that when all 8 threads are running and the system is shut down, large objects committed in transactions shortly before the shutdown are corrupt when the database is restarted. I know the large objects were committed, because the associated table entries which point to them are present after the restart, with valid information about the large object length and oid. However, when I access the large objects I am returned only a 2K chunk, even though the table entry tells me the object should be 320K.
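
The check I am doing when reading back amounts to the sketch below (same imports and API assumptions as the write sketch above); oid and storedLength are the values recorded in the associated table for the object.

    // Reads the large object back in 8K chunks and compares the total
    // against the length recorded in the associated table.
    public static void check(Connection conn, int oid, long storedLength) throws Exception {
        conn.setAutoCommit(false);                 // LO access must be inside a transaction
        LargeObjectManager lom =
            ((org.postgresql.PGConnection) conn).getLargeObjectAPI();
        LargeObject lo = lom.open(oid, LargeObjectManager.READ);

        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = lo.read(buf, 0, buf.length)) > 0) {
            total += n;
        }
        lo.close();
        conn.rollback();

        // After the problem restarts, total comes back as only about 2K even
        // though storedLength says the object should be 320K.
        System.out.println("read " + total + " bytes, table says " + storedLength);
    }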
 
Anybody have any ideas what the problem is? Are there any known issues with the recovery of large objects?
 
Chris White
