Problems with Large Objects using Postgres 7.2.1 - Mailing list pgsql-admin

From: Chris White
Subject: Problems with Large Objects using Postgres 7.2.1
Date:
Msg-id: 000f01c2f9ff$0f9e6250$ff926b80@amer.cisco.com
Responses: Re: Problems with Large Objects using Postgres 7.2.1
List: pgsql-admin
I have a Java application that writes large objects to the database through the JDBC interface. It reads binary data from an input stream, writes it to a large object in 8K chunks, and keeps a running count of the data length. At the end of the input stream it closes the large object and commits it, then updates the associated tables with the large object's oid and length and commits that information to the database. The application has multiple threads (max 8) writing these large objects simultaneously, each using its own connection. Whenever the system has a problem, a monitor application detects the need for a system shutdown and stops Postgres with a smart shutdown. A sketch of the write path follows.
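
To make the flow concrete, here is a minimal sketch of the write path. It uses the current org.postgresql.largeobject API, whose method names differ slightly from the 7.2-era driver, and the table and column names ("documents", "lo_oid", "lo_length", "id") are illustrative, not our real schema:

import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoWriter {
    // Streams binary input into a new large object in 8K chunks, then
    // records the oid and computed length in an associated table.
    static long writeLargeObject(Connection conn, InputStream in, long rowId)
            throws Exception {
        conn.setAutoCommit(false); // large objects must be used inside a transaction
        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
        long oid = lom.createLO(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        byte[] buf = new byte[8192];
        long length = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            lo.write(buf, 0, n);
            length += n;
        }
        lo.close();
        conn.commit(); // first commit: the large object itself

        // Second commit: the metadata row pointing at the large object.
        PreparedStatement ps = conn.prepareStatement(
                "UPDATE documents SET lo_oid = ?, lo_length = ? WHERE id = ?");
        ps.setLong(1, oid);
        ps.setLong(2, length);
        ps.setLong(3, rowId);
        ps.executeUpdate();
        ps.close();
        conn.commit();
        return oid;
    }
}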
 
What I am seeing is that when all 8 threads are running and the system is shut down, large objects committed in transactions shortly before the shutdown are corrupt when the database is restarted. I know the large objects were committed, because the associated table entries that point to them are present after the restart, with valid oid and length information. However, when I read such a large object back I am returned only a 2K chunk, even though the table entry says it should be 320K. The check I run amounts to the sketch below.
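
This is roughly the read-back check, where the expected length comes from the associated table (again using the current driver API; method names in the 7.2-era driver may differ slightly):

import java.sql.Connection;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoCheck {
    // Reads a large object back and counts the bytes actually returned,
    // for comparison with the length stored in the associated table.
    static long readBackLength(Connection conn, long oid) throws Exception {
        conn.setAutoCommit(false);
        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
        LargeObject lo = lom.open(oid, LargeObjectManager.READ);
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = lo.read(buf, 0, buf.length)) > 0) {
            total += n;
        }
        lo.close();
        conn.commit();
        return total; // after the bad restarts this comes back ~2K, not 320K
    }
}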
 
Anybody have any ideas what the problem is? Are there any known issues with the recovery of large objects?
 
Chris White
