Re: [INTERFACES] Managing the memory requirements of large query results - Mailing list pgsql-interfaces

From Tom Lane
Subject Re: [INTERFACES] Managing the memory requirements of large query results
Date
Msg-id 5288.950752025@sss.pgh.pa.us
In response to Managing the memory requirements of large query results  ("Bryan White" <bryan@arcamax.com>)
List pgsql-interfaces
"Bryan White" <bryan@arcamax.com> writes:
> It is my understanding that when a query is issued, the backend runs
> the query and accumulates the results in memory, and when it completes
> it transmits the entire result set to the front end.

No, the backend does not accumulate the result; it transmits tuples to
the frontend on the fly.  The current implementation of frontend libpq
does buffer the result rows on the frontend side, because it presents a
random-access-into-the-query-result API to the client application.
(There's been talk of offering an alternative API that eliminates the
buffering and the random-access option, but nothing's been done yet.)
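
For illustration, a minimal sketch of that random-access API; the
connection string, table, and column names here are invented:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");   /* hypothetical */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* PQexec does not return until the whole result has arrived;
         * libpq keeps every row in memory so that PQgetvalue can be
         * called with any row index, in any order. */
        PGresult *res = PQexec(conn, "SELECT id, name FROM customers");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        {
            int n = PQntuples(res);
            printf("last:  %s\n", PQgetvalue(res, n - 1, 1));  /* last row first */
            printf("first: %s\n", PQgetvalue(res, 0, 1));
        }
        PQclear(res);
        PQfinish(conn);
        return 0;
    }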

> I have studied the documentation and found Cursors and Asynchronous
> Query Processing.  Cursors seem to solve the problem on the front end,
> but I get the impression the back end will buffer the entire result
> until the cursor is closed.

A cursor should solve the problem just fine.  If you can put your finger
on what part of the documentation misled you, maybe we can improve it.
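
For instance, a batched FETCH loop over a cursor keeps the frontend's
memory bounded.  This sketch invents the connection string, table, and
cursor name, picks an arbitrary batch size of 1000, and wraps PQexec in
a small xexec() helper for brevity:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* Run a command; bail out on failure. */
    static PGresult *xexec(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK &&
            PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
            exit(1);
        }
        return res;
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");   /* hypothetical */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* Cursors only exist inside a transaction block. */
        PQclear(xexec(conn, "BEGIN"));
        PQclear(xexec(conn,
            "DECLARE c CURSOR FOR SELECT id, name FROM customers"));

        for (;;)
        {
            /* libpq holds at most 1000 rows at a time. */
            PGresult *res = xexec(conn, "FETCH 1000 FROM c");
            int n = PQntuples(res);
            for (int i = 0; i < n; i++)
                printf("%s\n", PQgetvalue(res, i, 1));
            PQclear(res);
            if (n == 0)
                break;              /* cursor exhausted */
        }

        PQclear(xexec(conn, "CLOSE c"));
        PQclear(xexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }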

> Asynchronous Query Processing, as I understand it, is more about not
> blocking the client during the query; it does not fundamentally alter
> the result buffering on either end.

Correct, it just lets a single-threaded client continue to do other
stuff while waiting for the (whole) result to arrive.
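
A minimal sketch of that pattern, with the same invented table; a real
application would select() on PQsocket(conn) between checks rather than
spin:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");   /* hypothetical */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* Dispatch the query without waiting for the answer. */
        if (!PQsendQuery(conn, "SELECT id, name FROM customers"))
            return 1;

        /* Poll for completion while doing other work.  The complete
         * result is still accumulated by libpq before PQgetResult
         * hands it back. */
        while (PQisBusy(conn))
        {
            /* ... do other client work here ... */
            if (!PQconsumeInput(conn))      /* pull any waiting data */
                return 1;
        }

        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("%d rows\n", PQntuples(res));
            PQclear(res);
        }
        PQfinish(conn);
        return 0;
    }
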
        regards, tom lane

