Re: SUBSTRING performance for large BYTEA - Mailing list pgsql-general

From Vance Maverick
Subject Re: SUBSTRING performance for large BYTEA
Date
Msg-id DAA9CBC6D4A7584ABA0B6BEA7EC6FC0B5D31FD@hq-exch01.corp.pgp.com
In response to Re: SUBSTRING performance for large BYTEA  ("Joshua D. Drake" <jd@commandprompt.com>)
Responses Re: SUBSTRING performance for large BYTEA  (Karsten Hilbert <Karsten.Hilbert@gmx.net>)
List pgsql-general
Karsten Hilbert writes:
> Well, in my particular case it isn't so much that I *want*
> to access bytea in chunks but rather that under certain
> not-yet-pinned-down circumstances windows clients tend to go
> out-of-memory on the socket during *retrieval* (insertion is
> fine, as is put/get access from Linux clients). Doing
> chunked retrieval works on those boxen, too, so it's an
> option in our application (the user defines a chunk size
> that works, a size of 0 is treated as no-chunking).

This is my experience with a Java client too.  Writing the data with
PreparedStatement.setBinaryStream works fine even for very large values,
but reading it back with the complementary method ResultSet.getBinaryStream
runs into the same memory problem, killing the Java VM.
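
For reference, here is a rough sketch of the chunked-retrieval approach in
JDBC.  The table and column names (attachments, id, data) are placeholders,
not from our actual schema, and a chunk size of 0 falls back to a single
full fetch, along the lines Karsten described:

import java.io.ByteArrayOutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ChunkedByteaRead {
    // Read a bytea column in pieces of chunkSize bytes; chunkSize <= 0
    // means no chunking (fetch the whole value in one go).
    static byte[] readData(Connection conn, long id, int chunkSize) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        if (chunkSize <= 0) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT data FROM attachments WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        buf.write(rs.getBytes(1));
                    }
                }
            }
            return buf.toByteArray();
        }
        // SUBSTRING on bytea is 1-based; keep asking for the next chunk
        // until the server returns fewer bytes than requested.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT substring(data FROM ? FOR ?) FROM attachments WHERE id = ?")) {
            int offset = 1;
            while (true) {
                ps.setInt(1, offset);
                ps.setInt(2, chunkSize);
                ps.setLong(3, id);
                byte[] chunk;
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) break;
                    chunk = rs.getBytes(1);
                }
                if (chunk == null || chunk.length == 0) break;
                buf.write(chunk);
                offset += chunk.length;
                if (chunk.length < chunkSize) break;
            }
        }
        return buf.toByteArray();
    }
}

The point is that each round trip only ever materializes chunkSize bytes on
the client, so the per-row memory spike from getBinaryStream never happens.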

Thanks to all for the useful feedback.  I'm going to post a note to the
JDBC list as well to make this easier to find in the future.

    Vance
