Thread: blob without largeobject api

blob without largeobject api

From
Edoardo Ceccarelli
Date:
I need to insert a Blob into a table without using the
org.postgresql.largeobject.* classes, because the db pool I'm
using (Resin) doesn't allow me to cast ((org.postgresql.PGConnection) db)
to get access to the Postgres LargeObject API.

// Generates a ClassCastException.
LargeObjectManager lom =
((org.postgresql.PGConnection)db).getLargeObjectAPI();

Since I'd like to keep using the same type of connection (without
changing the pool manager or connecting to the db directly), is there a
way to insert Blob (oid) data into a table without using the large
object API?

I looked at the examples under src/interfaces/jdbc/, but all the blob
examples use that type of access.

Thank You

--
Edoardo Ceccarelli





Re: blob without largeobject api

From
Oliver Jowett
Date:
Edoardo Ceccarelli wrote:
> I need to insert a Blob into a table without using the
> org.postgresql.largeobject.* classes, because the db pool I'm
> using (Resin) doesn't allow me to cast ((org.postgresql.PGConnection) db)
> to get access to the Postgres LargeObject API.
>
> // Generates a ClassCastException.
> LargeObjectManager lom =
> ((org.postgresql.PGConnection)db).getLargeObjectAPI();
>
> Since I'd like to keep using the same type of connection (without
> changing the pool manager or connecting to the db directly), is there a
> way to insert Blob (oid) data into a table without using the large
> object API?

1) Check you are using a recent driver; pre-7.4 drivers had a bug where
proxy connections handed out by the driver's PooledConnection
implementation (which is probably what Resin is using) did not implement
PGConnection, which may be the cause of your CCE.

2) Add '?compatible=7.1' (or '&compatible=7.1' if you already have URL
parameters present) to your driver URL and use
PreparedStatement.setBytes() / ResultSet.getBytes(). Note that you won't
be able to use these methods to access bytea fields if you do this, and
it may change other driver behaviour too.
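
A rough sketch of (2), for illustration only -- the URL, credentials,
table and column names here are all invented:

import java.sql.*;

public class BlobCompatRead {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");

        // '?compatible=7.1' if it's the first URL parameter,
        // '&compatible=7.1' otherwise.
        String url = "jdbc:postgresql://localhost/mydb?compatible=7.1";

        Connection db = DriverManager.getConnection(url, "user", "password");
        db.setAutoCommit(false);        // LO access must run inside a transaction

        PreparedStatement ps = db.prepareStatement(
                "SELECT raw_data FROM yourtable WHERE id = ?");
        ps.setInt(1, 42);
        ResultSet rs = ps.executeQuery();
        if (rs.next()) {
            // with compatible=7.1 the driver follows the OID and reads the LO
            byte[] data = rs.getBytes(1);
            System.out.println("read " + data.length + " bytes");
        }
        rs.close();
        ps.close();
        db.commit();
        db.close();
    }
}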

-O

Re: blob without largeobject api

From
Edoardo Ceccarelli
Date:
I was thinking of taking this chance to drop the Resin pool and start
using the Postgres one for all database connections in my app.
Can you give me some hints about its reliability and point me to some
example code?

Thanks again
Edoardo
PS: would the procedure to clean up unreferenced LOs also be needed
when using the LargeObject API?


Oliver Jowett ha scritto:

> Edoardo Ceccarelli wrote:
>
>> Tried with a driver downloaded from http://jdbc.postgresql.org/
>>
>> pg74.213.jdbc3.jar
>>
>> the result is always the same:
>> java.lang.ClassCastException: com.caucho.sql.UserConnection
>> on row:
>> LargeObjectManager lom =
>> ((org.postgresql.PGConnection)db).getLargeObjectAPI();
>
>
> Ok, this means your pooling implementation is not using the postgresql
> driver's PooledConnection to generate the proxy connections to hand
> out, but is implementing them itself (using instances of
> com.caucho.sql.UserConnection, it appears). So you won't be able to
> get at postgresql-specific interfaces with that pooling
> implementation, regardless of what the driver does.
>
>> I think the second suggestion is about using a "standard" way to
>> save a blob field. I have tried something like this:
>> - insert the record without the blob
>> - open the record for update and use setBytes() to save the blob data
>> I'm having some minor problems so far, but I'd like to know if this is
>> the right way to do it.
>
>
> That's the basic idea, although there's no real need to separate the
> insert and update. With compatible=7.1 in effect, setBytes() becomes
> "create an appropriate LO and set the parameter to the new OID". The
> main gotcha is that you must wrap any queries that deal with LOs in a
> transaction; autocommit won't work. You'll also need a mechanism to
> clean up LOs that no longer have references to them in the main DB (I
> think there might be something in contrib/ to do this?)
>
> -O
>
>
>

Re: blob without largeobject api

From
Edoardo Ceccarelli
Date:
Tried with a driver downloaded from http://jdbc.postgresql.org/

pg74.213.jdbc3.jar

the result is always the same:
java.lang.ClassCastException: com.caucho.sql.UserConnection
on row:
LargeObjectManager lom =
((org.postgresql.PGConnection)db).getLargeObjectAPI();

I think the second suggestion is about using a "standard" way to
save a blob field. I have tried something like this:
- insert the record without the blob
- open the record for update and use setBytes() to save the blob data
I'm having some minor problems so far, but I'd like to know if this is
the right way to do it.

thanks again
Edoardo

Oliver Jowett ha scritto:

> Edoardo Ceccarelli wrote:
>
>> I need to insert a Blob into a table without using the
>> org.postgresql.largeobject.* classes, because the db pool I'm
>> using (Resin) doesn't allow me to cast ((org.postgresql.PGConnection) db)
>> to get access to the Postgres LargeObject API.
>>
>> // Generates a ClassCastException.
>> LargeObjectManager lom =
>> ((org.postgresql.PGConnection)db).getLargeObjectAPI();
>>
>> Since I'd like to keep using the same type of connection (without
>> changing the pool manager or connecting to the db directly), is there
>> a way to insert Blob (oid) data into a table without using the large
>> object API?
>
>
> 1) Check you are using a recent driver; pre-7.4 drivers had a bug
> where proxy connections handed out by the driver's PooledConnection
> implementation (which is probably what Resin is using) did not
> implement PGConnection, which may be the cause of your CCE.
>
> 2) Add '?compatible=7.1' (or '&compatible=7.1' if you already have URL
> parameters present) to your driver URL and use
> PreparedStatement.setBytes() / ResultSet.getBytes(). Note that you
> won't be able to use these methods to access bytea fields if you do
> this, and it may change other driver behaviour too.
>
> -O
>
>
>

Re: blob without largeobject api

From
Oliver Jowett
Date:
Edoardo Ceccarelli wrote:
> Tried with a driver downloaded from http://jdbc.postgresql.org/
>
> pg74.213.jdbc3.jar
>
> the result is always the same:
> java.lang.ClassCastException: com.caucho.sql.UserConnection
> on row:
> LargeObjectManager lom =
> ((org.postgresql.PGConnection)db).getLargeObjectAPI();

Ok, this means your pooling implementation is not using the postgresql
driver's PooledConnection to generate the proxy connections to hand out,
but is implementing them itself (using instances of
com.caucho.sql.UserConnection, it appears). So you won't be able to get
at postgresql-specific interfaces with that pooling implementation,
regardless of what the driver does.

> I think the second suggestion is about using a "standard" way to
> save a blob field. I have tried something like this:
> - insert the record without the blob
> - open the record for update and use setBytes() to save the blob data
> I'm having some minor problems so far, but I'd like to know if this is
> the right way to do it.

That's the basic idea, although there's no real need to separate the
insert and update. With compatible=7.1 in effect, setBytes() becomes
"create an appropriate LO and set the parameter to the new OID". The
main gotcha is that you must wrap any queries that deal with LOs in a
transaction; autocommit won't work. You'll also need a mechanism to
clean up LOs that no longer have references to them in the main DB (I
think there might be something in contrib/ to do this?)
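
As a sketch of that single-statement version (the table and column names
are invented, and compatible=7.1 is assumed to already be on the URL):

import javax.sql.DataSource;
import java.sql.*;

class LoInsertSketch
{
    // Table/column names are placeholders; compatible=7.1 assumed in the URL.
    static void storeBlob(DataSource ds, String name, byte[] data) throws SQLException
    {
        Connection con = ds.getConnection();
        try
        {
            con.setAutoCommit(false);       // autocommit won't work with LOs
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO mytable (name, raw_data) VALUES (?, ?)");
            ps.setString(1, name);
            ps.setBytes(2, data);           // creates a new LO and binds its OID
            ps.executeUpdate();
            ps.close();
            con.commit();                   // the LO only persists if this commits
        }
        catch (SQLException e)
        {
            con.rollback();
            throw e;
        }
        finally
        {
            con.close();
        }
    }
}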

-O

Re: blob without largeobject api

From
"David Wall"
Date:
Here's how we do it:

First, in your table, use the OID type (this maps to the BLOB type in
Oracle):

raw_data OID NOT NULL,

The JDBC SQL is something like

INSERT INTO yourtable (raw_data) VALUES (?);

(With Oracle, this is different in that you need to insert an empty_blob()
and then update it)

Then we use a wrapper Blob class and create the Blob object from a byte
array, but you could also stream it if that's important to you:

YoByteBlob attBlob = new YoByteBlob(byteArrayOfData);
stmt.setBlob(1, attBlob);   // bind the Blob to the '?' parameter
int num = stmt.executeUpdate();

And yes, you do need to wrap this in a transaction, but that should be
standard operating procedure for anything but toy applications in my
opinion, since who wants to deal with the trouble of out-of-sync data?


Retrieving it is pretty straightforward, too:

java.sql.Blob b = resultSet.getBlob(1);
YoByteBlob ybb = new YoByteBlob(b);

David

Our YoByteBlob wrapper class (below) has been cleared for release on this
list (in the end, it's a shame a simple Blob implementation was not provided
as part of the JDBC spec). Note that this wrapper doesn't allow for streaming
of blobs, so it may not meet your needs as it does ours (our blobs go through
compression and encryption before being stuffed in the db):

// Copyright (c) 2002-2003 Yozons, Inc.  All rights reserved.
// This file is proprietary and confidential.
//

package com.yozons.jdbc;

import java.sql.SQLException;

/**
 * Screwy wrapper class so that we can insert a Blob into the database from a
 * byte array.
 * Includes more screwy stuff for Oracle-specific updating of a blob (the only
 * way to insert a new blob).
 *
 * @author David Wall
 */
public class YoByteBlob
    implements java.sql.Blob
{
    byte[] bytes = null;

    /**
     * Creates a YoByteBlob using the specified byte array.
     */
    public YoByteBlob(byte[] b)
    {
        bytes = b;
    }

    // My own constructor for taking a Blob of input and returning as an array
    public YoByteBlob(java.sql.Blob b)
    {
        java.io.InputStream is = null;
        try
        {
            is = b.getBinaryStream();
            bytes = new byte[(int)b.length()];

            // read() may return fewer bytes than requested, so loop until the
            // buffer is full
            int off = 0;
            int n;
            while ( off < bytes.length &&
                    (n = is.read(bytes, off, bytes.length - off)) > 0 )
            {
                off += n;
            }
        }
        catch( java.sql.SQLException e )
        {
            bytes = null;
        }
        catch( java.io.IOException e )
        {
            bytes = null;
        }
        finally
        {
            try
            {
                if ( is != null )
                    is.close();
            }
            catch( Exception e ) {}
        }
    }

    public long length()
        throws java.sql.SQLException
    {
        return bytes.length;
    }

    // My own API call for simplicity
    public byte[] getBytes()
    {
        return bytes;
    }

    public byte[] getBytes(long pos, int length)
        throws java.sql.SQLException
    {
        // Note: JDBC Blob positions are 1-based, not 0-based
        if ( pos == 1 && length == bytes.length )
            return bytes;

        try
        {
            byte[] newbytes = new byte[length];
            System.arraycopy(bytes, (int)(pos - 1), newbytes, 0, length);
            return newbytes;
        }
        catch( Exception e )
        {
            throw new java.sql.SQLException(
                "Could not get subset of array for YoByteBlob");
        }
    }

    public java.io.InputStream getBinaryStream()
        throws java.sql.SQLException
    {
        return new java.io.ByteArrayInputStream(bytes);
    }

    // The mutating Blob methods below are not supported; this wrapper is
    // read-only once constructed.
    public int setBytes(long pos, byte[] bytes, int offset, int len)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException("Unsupported setBytes() for YoByteBlob");
    }

    public int setBytes(long pos, byte[] bytes)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException("Unsupported setBytes() for YoByteBlob");
    }

    public java.io.OutputStream setBinaryStream(long pos)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException(
            "Unsupported setBinaryStream() for YoByteBlob");
    }

    public void truncate(long len)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException("Unsupported truncate() for YoByteBlob");
    }

    public long position(byte[] pattern, long start)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException("Unsupported position() for YoByteBlob");
    }

    public long position(java.sql.Blob pattern, long start)
        throws java.sql.SQLException
    {
        throw new java.sql.SQLException("Unsupported position() for YoByteBlob");
    }

    /**
     * Routine used to put the "real" object into an Oracle database, which
     * requires creating an empty blob, then retrieving it again and updating
     * it from there.
     */
    public void updateOracleBlob(java.sql.Blob b)
        throws java.sql.SQLException
    {
        java.io.OutputStream outstream = null;

        try
        {
            if ( b == null )
                throw new SQLException("YoByteBlob.updateOracleBlob() blob was null");
            if ( ! (b instanceof oracle.sql.BLOB) )
                throw new SQLException(
                    "YoByteBlob.updateOracleBlob() blob not an oracle.sql.BLOB object; is: " +
                    b.getClass().getName() );
            if ( bytes == null )
                throw new SQLException("YoByteBlob.updateOracleBlob() no blob bytes to write");

            oracle.sql.BLOB blob = (oracle.sql.BLOB)b;
            outstream = blob.getBinaryOutputStream();

            // Write the byte array in chunks of the BLOB's preferred buffer size
            int bufSize = blob.getBufferSize();
            int pos = 0;
            int remaining = bytes.length;
            while ( remaining > 0 )
            {
                int numOut = Math.min(bufSize, remaining);
                outstream.write(bytes, pos, numOut);
                pos += numOut;
                remaining -= numOut;
            }
        }
        catch( java.io.IOException e )
        {
            throw new java.sql.SQLException(
                "YoByteBlob.updateOracleBlob() I/O failure: " + e.getMessage());
        }
        finally
        {
            try
            {
                if ( outstream != null )
                    outstream.close();
            }
            catch( java.io.IOException e )
            {
                throw new java.sql.SQLException(
                    "YoByteBlob.updateOracleBlob() close I/O failure: " + e.getMessage());
            }
        }
    }
}



Re: blob without largeobject api

From
Oliver Jowett
Date:
Edoardo Ceccarelli wrote:
> I was thinking of taking this chance to drop the Resin pool and start
> using the Postgres one for all database connections in my app.
> Can you give me some hints about its reliability and point me to some
> example code?

I don't use a third-party connection pool for various reasons, so I can't
really recommend one. I think the driver does provide a pool
implementation, but it didn't seem too flexible at a glance -- and it's
going to blindside any app server you're using, since it just looks like
a DataSource.

What you really want is a pooling implementation that uses the
implementations of ConnectionPoolDataSource provided by the driver:
org.postgresql.jdbc2.optional.ConnectionPool or
org.postgresql.jdbc3.ConnectionPool. Despite the name, these classes
provide *support* for connection pool implementations but no actual
pooling implementation or policy.

What they do is provide a means for the actual pooling implementation to
get PooledConnection objects representing physical connections to the
DB; these are the objects that get pooled and reused. Each
PooledConnection allows you to get proxy connection handles that look
like normal JDBC connections, except that they actually delegate back to
the underlying physical connection. Closing a proxy connection invokes a
listener callback rather than closing the physical connection; the pool
implementation can use this callback to return the physical connection
to its pool.

Connection pool implementations can then build whatever pooling policy
they want on top of this, and hand out these proxy connection handles to
their clients.
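
In code, the skeleton of such a pool looks roughly like this. This is only
a sketch: the concrete class name follows the ones above and may differ
between driver versions, the configuration setters are those of the
driver's DataSource base class, and the server/database settings are
placeholders:

import javax.sql.*;
import java.sql.Connection;

public class PoolSkeleton {
    public static void main(String[] args) throws Exception {
        // Driver-provided ConnectionPoolDataSource (see class names above).
        org.postgresql.jdbc3.ConnectionPool cpds = new org.postgresql.jdbc3.ConnectionPool();
        cpds.setServerName("localhost");
        cpds.setDatabaseName("mydb");
        cpds.setUser("user");
        cpds.setPassword("password");

        // One physical connection; this is the object a pool keeps and reuses.
        final PooledConnection pc = cpds.getPooledConnection();
        pc.addConnectionEventListener(new ConnectionEventListener() {
            public void connectionClosed(ConnectionEvent e) {
                // the client closed its proxy handle: return pc to the pool here
            }
            public void connectionErrorOccurred(ConnectionEvent e) {
                // the physical connection broke: discard pc
            }
        });

        // Proxy handle given out to clients; it also implements PGConnection,
        // so the LargeObject API is reachable through it.
        Connection handle = pc.getConnection();
        handle.close();   // fires connectionClosed() rather than closing the physical connection
    }
}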

The reason you want to use this sort of approach is that the proxy
connections handed out by the driver can implement any driver extensions
as necessary, without the connection pool itself having to know about it
-- i.e. in this case the proxy connections would implement PGConnection.

I'm not familiar with Resin, but is it possible that you can just tell
it to use the driver's ConnectionPoolDataSource implementation and it'll
use this sort of strategy?

> PS: would the procedure to clean up unreferenced LOs also be needed
> when using the LargeObject API?

True, but usually if you're using the LO API directly you will be
dealing with OIDs yourself and can do the cleanup from your app via the
LO API's unlink() method. If you're just using setBytes() to manage them
transparently and don't have access to the LO API, there's no way to
delete the old underlying LO when you update a field containing an LO
OID.
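
For the direct-API case, that cleanup is roughly the following. A sketch
only: it assumes you already know the orphaned OID, that your connection
exposes PGConnection, and that unlink() has this signature in your driver
version:

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObjectManager;
import java.sql.Connection;

class LoCleanupSketch
{
    // Deletes a large object whose OID is no longer referenced by any row.
    static void dropOrphanedLo(Connection db, int oid) throws Exception
    {
        db.setAutoCommit(false);        // LO calls must run inside a transaction
        LargeObjectManager lom = ((PGConnection) db).getLargeObjectAPI();
        lom.unlink(oid);                // removes the LO itself
        db.commit();
    }
}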

-O

Re: blob without largeobject api

From
Oliver Jowett
Date:
David Wall wrote:

[.. using LOs ..]

> And yes, you do need to wrap this in a transaction, but that should be
> standard operating procedure for anything but toy applications in my opinion
> since who wants to deal with the troubles of out of sync data.

And if you happen to be grabbing all the data you need in one SELECT,
and the transaction is read-only, why would you not want to use
autocommit?

We had exactly this case in our (distinctly non-toy) application here,
and it was a PITA to work around when we were using LOs. We use bytea
now, mostly because of this lack of transparency when using LOs -- with
an LO, the data isn't really part of the table, and it bites you in all
sorts of strange ways if you try to treat it as if it were "just another
field".

-O

Re: blob without largeobject api

From
Markus Schaber
Date:
Hi, Oliver,

On Wed, 07 Apr 2004 00:24:35 +1200
Oliver Jowett <oliver@opencloud.com> wrote:

> Edoardo Ceccarelli wrote:
> > Tried with a driver downloaded from http://jdbc.postgresql.org/
> >
> > pg74.213.jdbc3.jar
> >
> > the result is always the same:
> > java.lang.ClassCastException: com.caucho.sql.UserConnection
> > on row:
> > LargeObjectManager lom =
> > ((org.postgresql.PGConnection)db).getLargeObjectAPI();
>
> Ok, this means your pooling implementation is not using the postgresql
> driver's PooledConnection to generate the proxy connections to hand
> out, but is implementing them itself (using instances of
> com.caucho.sql.UserConnection, it appears). So you won't be able to
> get at postgresql-specific interfaces with that pooling
> implementation, regardless of what the driver does.

Maybe his pooling implementation provides a method to get to the
underlying "real" connection.

In one of our projects, we had to use this trick to add the PostGIS
wrapper classes to the JDBC driver.

HTH,
Markus

--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com