Below is a post from 2000. It describes a limitation in processing large
selects with libpq. Does this limitation still exist?
Thanks in advance,
Bob Gilson
Kirby Bohling wrote:
>> I am trying to run a select statement, and I keep running out of
>> memory. After a little investigating, I found that libpq appears to
>> get the entire result set at once.
The bottleneck here is mainly that libpq's API is defined in terms of
providing random access to a result set, no matter how large --- so
libpq has to buffer the whole result set in client memory.
Aside from random access there are also error-reporting issues.
Currently libpq guarantees to tell you about any errors encountered
during a query before you start to read result rows. That guarantee
wouldn't hold in a streaming-results scenario.
These issues have been discussed quite a few times before --- see the
pg-interfaces archives. I think everyone agrees that it'd be a good
idea to have a streamable libpq interface, but no one's stepped up to
the plate to define or implement one...
regards, tom lane
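
For reference, the usual workaround for the whole-result-set buffering Tom
describes is to declare a server-side cursor and FETCH it in batches, so
only one batch is held in client memory at a time. A minimal sketch (the
table name, cursor name, and batch size are placeholders):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* connection parameters taken from the environment (PGHOST, PGDATABASE, ...) */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* cursors must live inside a transaction block */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM big_table"));

    for (;;) {
        PGresult *res = PQexec(conn, "FETCH 1000 FROM big_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int rows = PQntuples(res);
        if (rows == 0) {            /* no more rows */
            PQclear(res);
            break;
        }
        for (int i = 0; i < rows; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);               /* only one batch held in memory at a time */
    }

    PQclear(PQexec(conn, "CLOSE big_cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}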
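
More recent libpq releases also provide a streaming interface of the kind
Tom mentions: single-row mode, added in PostgreSQL 9.2 via
PQsetSingleRowMode(). A minimal sketch (again with a placeholder table
name); note that, exactly as Tom's error-reporting point implies, an error
can now arrive after some rows have already been delivered:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* send the query without waiting, then switch to single-row mode
     * before retrieving any results */
    if (!PQsendQuery(conn, "SELECT * FROM big_table") ||
        !PQsetSingleRowMode(conn)) {
        fprintf(stderr, "could not start streaming query: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        ExecStatusType st = PQresultStatus(res);
        if (st == PGRES_SINGLE_TUPLE) {
            /* exactly one row per result in this mode */
            printf("%s\n", PQgetvalue(res, 0, 0));
        } else if (st != PGRES_TUPLES_OK) {
            /* an error may show up mid-stream, after rows were already seen */
            fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));
        }
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}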