Re: binary data in `bytea' column - Mailing list pgsql-jdbc

From Oliver Jowett
Subject Re: binary data in `bytea' column
Date
Msg-id 40844DE7.9070802@opencloud.com
In response to binary data in `bytea' column  (Anton Komarevtsev <avkx@rt.mipt.ru>)
Responses Re: binary data in `bytea' column  (Anton Komarevtsev <avkx@rt.mipt.ru>)
List pgsql-jdbc
Anton Komarevtsev wrote:
> Hello.
>
> It's necessary for me to store binary data (images, sounds, texts as
> bytes) in the database. What should I use for better scalability,
> generic `PreparedStatement.setBinaryStream' and
> `ResultSet.getBinaryStream', or Large Objects?

How big is each piece of binary data? How much memory on the JVM side
can you throw at the problem?

If each chunk of data is bigger than around 10% of available memory,
you'll want to use LOs. Otherwise bytea + setBinaryStream should work
and is simpler to use.
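For the bytea route, a minimal sketch looks something like the following
(untested; the images(id int, data bytea) table, file name, and connection
details are made up for illustration):

    import java.io.*;
    import java.sql.*;

    public class ByteaExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "user", "password");

            // Store a file in the bytea column via setBinaryStream.
            File file = new File("photo.jpg");
            InputStream in = new FileInputStream(file);
            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO images (id, data) VALUES (?, ?)");
            ins.setInt(1, 1);
            ins.setBinaryStream(2, in, (int) file.length());
            ins.executeUpdate();
            ins.close();
            in.close();

            // Read it back as a stream.
            PreparedStatement sel = conn.prepareStatement(
                    "SELECT data FROM images WHERE id = ?");
            sel.setInt(1, 1);
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                InputStream data = rs.getBinaryStream(1);
                // ... consume the stream here ...
                data.close();
            }
            rs.close();
            sel.close();
            conn.close();
        }
    }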

The current driver will require 6-7 times the raw data size in
additional memory when binary data is used as a query parameter, and
roughly the same (I think) when binary data is received as a query
result. I am working on some changes that reduce the parameter overhead
to essentially zero (just waiting on some testing results); however the
query result overhead is going to remain for a while since we really
need the v3 protocol to fix it.

> I use Postgres 7.4, and autocommit=on is hardcoded in it.

7.4 doesn't support autocommit=off on the server side, as I understand
it -- is this what you mean?

> But I've
> heard that Large Objects may only work within a single transaction. So,
> if I should use Large Objects (if so), how should I use them?

You can still change the JDBC-level autocommit setting safely (via
Connection.setAutoCommit) regardless of the server's autocommit
capabilities.
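Roughly like this (an untested sketch; the sounds(id int, data oid) table,
file name, and connection details are made up). The important part is the
setAutoCommit(false)/commit() pair around the Large Object calls:

    import java.io.*;
    import java.sql.*;
    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LoExample {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "user", "password");

            // LO operations must run inside a transaction, so turn off
            // JDBC-level autocommit regardless of the server setting.
            conn.setAutoCommit(false);

            LargeObjectManager lom =
                    ((PGConnection) conn).getLargeObjectAPI();

            // Create a new large object and write the file into it.
            int oid = lom.create(LargeObjectManager.READ | LargeObjectManager.WRITE);
            LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
            FileInputStream in = new FileInputStream("sound.wav");
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                lo.write(buf, 0, n);
            }
            lo.close();
            in.close();

            // Store the OID in an ordinary column so it can be found later.
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO sounds (id, data) VALUES (?, ?)");
            ps.setInt(1, 1);
            ps.setInt(2, oid);
            ps.executeUpdate();
            ps.close();

            conn.commit();
            conn.close();
        }
    }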

-O
