Gunther Schadow <gunther@aurora.regenstrief.org> writes:
> The constructive responses suggested that I use LIMIT/OFFSET and
> CURSORs. I can see how that could work around the problem, but
> I still believe that something is wrong with the PostgreSQL query
> executor. Loading the entire result set into a buffer without
> need just makes no sense.
The Postgres backend does not do that. Most of the frontend client-side
libraries do, but feel free to write one that does not.
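For instance, a client can consume the result set one row at a time
instead of buffering it. Here is a minimal sketch using libpq's
single-row mode (PQsetSingleRowMode; note that this is a later libpq
addition, and the connection string and table name are placeholders):

    /* Minimal sketch: stream rows from the backend instead of
     * buffering the whole result set client-side. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");  /* placeholder */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Send the query without waiting for the whole result set,
         * then ask libpq to hand back one row per PQgetResult. */
        if (!PQsendQuery(conn, "SELECT 1/x FROM foo") ||
            !PQsetSingleRowMode(conn)) {
            fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            switch (PQresultStatus(res)) {
            case PGRES_SINGLE_TUPLE:
                printf("row: %s\n", PQgetvalue(res, 0, 0));
                break;
            case PGRES_TUPLES_OK:
                /* normal end of the result set */
                break;
            default:
                /* error partway through; earlier rows already shown */
                fprintf(stderr, "query failed: %s",
                        PQresultErrorMessage(res));
                break;
            }
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }

In this mode each PQgetResult call returns a single PGRES_SINGLE_TUPLE
row, so the client's memory use stays flat no matter how large the
result set is.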
Offhand I think the only really serious downside to letting the
application code process the result in a streaming fashion is that the
application would have to be prepared to undo whatever it's done so far,
if it gets an error report partway through the result set. An example:
SELECT 1/x FROM foo;
where foo.x contains zeroes here and there. You'll get some rows out
before the query is abandoned with a divide-by-zero error. By
accumulating the result set, the existing libraries are able to offer a
cleaner yes-it-succeeded or no-it-didn't API. But this is surely not a
fatal objection; coping with mid-stream errors is simply a necessary
part of any less-clean streaming API.
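To make that concrete, here is a sketch of the cursor-based workaround
mentioned above, showing where the undo burden lands. The table foo,
the batch size, and the app_* helpers are hypothetical, and error
checks on BEGIN and DECLARE are elided for brevity:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void app_process_row(const char *val)
    { printf("row: %s\n", val); }
    static void app_undo_all(void)
    { /* back out whatever was done for earlier rows */ }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");  /* placeholder */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT 1/x FROM foo"));

        for (;;) {
            PGresult *res = PQexec(conn, "FETCH 100 FROM c");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                /* Divide-by-zero partway through: earlier batches were
                 * already handed to the application, which now has to
                 * back them out itself. */
                fprintf(stderr, "query failed: %s",
                        PQresultErrorMessage(res));
                PQclear(res);
                app_undo_all();
                PQclear(PQexec(conn, "ROLLBACK"));
                PQfinish(conn);
                return 1;
            }
            if (PQntuples(res) == 0) {      /* cursor exhausted */
                PQclear(res);
                break;
            }
            for (int i = 0; i < PQntuples(res); i++)
                app_process_row(PQgetvalue(res, i, 0));
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

The rows fetched before the bad one are already in the application's
hands when the error arrives, so only the application can decide how
to compensate for them; that is exactly the cleanliness the buffering
libraries buy by accumulating the whole set first.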
> [snip]
> And of course, the point is that it makes
> sense to design the server such that streaming results to the
> client is transparent, because it automatically relieves the strain
> on all resources: CPU, storage, and network! Isn't that obvious?
The reason you got brush-off responses before was that you went into
lecture mode when you clearly hadn't spent any effort studying Postgres
internals. You're still doing that...
regards, tom lane