Hi everyone,
I have a table containing file contents in bytea columns.
What I am trying to achieve is to iterate over a result set
containing such columns and stream each value into a ZIP archive as I
go.
The problem is that ResultSet.getBinaryStream returns a
ByteArrayInputStream, i.e. the whole value is materialized in memory.
Iterating over many rows, each holding more than 10 MB of data,
exhausts the heap, and at peak times several such processes will run
concurrently.
I am using postgresql-8.4-702.jdbc3.jar against a PG 8.4.5 installation.
I looked at the current source of the driver.
Jdbc3ResultSet extends AbstractJdbc3ResultSet, which extends
AbstractJdbc2ResultSet; the latter provides the implementation of
getBinaryStream, returning a ByteArrayInputStream for bytea columns
and a BlobInputStream for blob columns. Skimming the code,
BlobInputStream does seem to stream the bytes instead of reading them
all into memory (reads are done in 4 kB chunks).
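For illustration only (this is my sketch, not code from the driver): the chunked-copy pattern that keeps the heap bounded can look like the snippet below, assuming you already have one InputStream per row (e.g. from getBinaryStream on a large-object column) and a target ZipOutputStream. The entry name, the 4 kB buffer size, and the stand-in ByteArrayInputStream used in main are my assumptions for the demo.

```java
import java.io.*;
import java.util.zip.*;

public class ChunkedZip {

    // Copy one stream into a zip entry in fixed-size chunks, so at most
    // one small buffer (4 kB here) is resident in the heap at a time,
    // regardless of how large the row's value is.
    static void addEntry(ZipOutputStream zip, String name, InputStream in)
            throws IOException {
        zip.putNextEntry(new ZipEntry(name));
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            zip.write(buf, 0, n);
        }
        zip.closeEntry();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a per-row stream from the ResultSet.
        byte[] payload = new byte[100_000];
        for (int i = 0; i < payload.length; i++) {
            payload[i] = (byte) (i % 251);
        }

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(out)) {
            addEntry(zip, "file-1.bin", new ByteArrayInputStream(payload));
        }

        // Verify the round trip: unzip and compare with the original bytes.
        try (ZipInputStream unzip =
                 new ZipInputStream(new ByteArrayInputStream(out.toByteArray()))) {
            ZipEntry e = unzip.getNextEntry();
            ByteArrayOutputStream back = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = unzip.read(buf)) != -1) {
                back.write(buf, 0, n);
            }
            System.out.println(e.getName());
            System.out.println(java.util.Arrays.equals(payload, back.toByteArray()));
        }
    }
}
```

With a bytea column this pattern does not help as long as the driver has already buffered the whole value; it only pays off when the InputStream itself is lazy, as BlobInputStream appears to be.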
So what are my options? Refactor the DB schema to use blobs (large
objects) rather than bytea? Or is there really no way to have bytea
read in chunks?
Kind regards:
al_shopov