Andreas Prohaska wrote:
> Hi!
>
> What's the difference, in the current (7.3.4) JDBC driver, between
> ordinary JDBC Blobs and the LargeObject API with respect to
> *streaming* data into the DB? I don't want to allocate a buffer in
> the Java VM to hold the whole blob and send it in one piece. Rather,
> I would like to work with a small buffer (~4kb) and stream the blob
> from the filesystem into the DB.
>
First, we don't have proper Large Objects because PostgreSQL does not
yet have SQL LOBs. They are expected in 7.5.
We have two things:
1) An extension to the JDBC API for PostgreSQL's own Large Object
type. It streams data exactly the way you want (see the sketch right
after this list). But this type has security issues (which may or may
not matter to your application).
2) The VARBINARY type, which is mapped to the PostgreSQL bytea type
and can hold large values. The community version of the driver,
however, buffers the whole value in memory and converts it to an
ASCII escaped form, so it usually runs out of memory on large data (a
standard-JDBC sketch follows the patch note below).
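To make (1) concrete, here is a minimal sketch of streaming a file
into a Large Object in ~4kb chunks using the driver's
LargeObjectManager extension. The connection URL, the file name, and
the images(name, data) table are invented for the example; note that
Large Object calls must run inside a transaction:

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoStreamSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/test", "user", "password");
        conn.setAutoCommit(false);  // LO calls need a transaction

        // Get the driver's Large Object extension API
        LargeObjectManager lom =
            ((PGConnection) conn).getLargeObjectAPI();
        int oid = lom.create(
            LargeObjectManager.READ | LargeObjectManager.WRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);

        // Copy the file in 4kb chunks; only this small buffer
        // ever lives in the Java VM
        FileInputStream in = new FileInputStream("mydata.bin");
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            lo.write(buf, 0, n);
        }
        in.close();
        lo.close();

        // Remember the oid so the object can be found again
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO images (name, data) VALUES (?, ?)");
        ps.setString(1, "mydata.bin");
        ps.setInt(2, oid);
        ps.executeUpdate();
        ps.close();

        conn.commit();
        conn.close();
    }
}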
A patch that works around the problems in (2) for 7.3 backends was
posted on the list; feel free to use it. On 7.4 this problem will be
solved by the new V3 protocol, and Red Hat is producing a patch to
solve it for 7.4 backends.
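For comparison, (2) needs nothing beyond standard JDBC calls; whether
the bytes are actually streamed or materialized in memory depends on
the driver version and the patches mentioned above. A sketch, again
with the invented images table (data being a bytea column this time):

import java.io.File;
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ByteaSketch {
    public static void insertFile(Connection conn, File file)
            throws Exception {
        FileInputStream in = new FileInputStream(file);
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO images (name, data) VALUES (?, ?)");
        ps.setString(1, file.getName());
        // Standard JDBC: hand the driver the stream plus its length.
        // The community 7.3 driver buffers this in memory; the patch
        // mentioned above works around that.
        ps.setBinaryStream(2, in, (int) file.length());
        ps.executeUpdate();
        ps.close();
        in.close();
    }
}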
> Looking at the AbstractJdbc2Blob class I think that JDBC Blobs internally
> use LargeObjects. As far as I know, this was not the case in earlier versions
> of the driver. Am I right?
>
That is correct: the driver's Blob class wraps PostgreSQL Large
Objects, not the standard JDBC and SQL BLOBs (which, as noted above,
PostgreSQL does not have yet).
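So reading through java.sql.Blob looks like standard JDBC on the
surface, but because a Large Object sits underneath it must also be
done inside a transaction. A sketch, with the same invented table and
file names as before:

import java.io.InputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class BlobReadSketch {
    public static void readBlob(Connection conn) throws Exception {
        conn.setAutoCommit(false);  // the underlying LO needs it
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery(
            "SELECT data FROM images WHERE name = 'mydata.bin'");
        if (rs.next()) {
            Blob blob = rs.getBlob(1);  // wraps the Large Object
            InputStream in = blob.getBinaryStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                // process buf[0..n) one chunk at a time
            }
            in.close();
        }
        rs.close();
        st.close();
        conn.commit();
    }
}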
> So far, I'm using LargeObjects and everything works fine, but I intend to
> use c-jdbc for db replication and would have to use JDBC blobs then.
>
We don't have real JDBC Blobs yet because PostgreSQL itself does not
have them. But I believe c-jdbc works with PostgreSQL, so there must
be a way around it.
--
Fernando Nasser
Red Hat Canada Ltd. E-Mail: fnasser@redhat.com
2323 Yonge Street, Suite #300
Toronto, Ontario M4P 2C9