Re: BUG #1756: PQexec eats huge amounts of memory - Mailing list pgsql-bugs

From Alvaro Herrera
Subject Re: BUG #1756: PQexec eats huge amounts of memory
Date
Msg-id 20050711003836.GA31881@alvh.no-ip.org
In response to BUG #1756: PQexec eats huge amounts of memory  ("Denis Vlasenko" <vda@ilport.com.ua>)
Responses Re: BUG #1756: PQexec eats huge amounts of memory
Re: BUG #1756: PQexec eats huge amounts of memory
List pgsql-bugs
On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
> On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:

> > Really?  I thought what really happened is you had to get the results
> > one at a time using the pg_fetch family of functions.  If that is true,
> > then it's possible to make the driver fake having the whole table by
> > using a cursor.  (Even if PHP doesn't do it, it's possible for OCI to do
> > it behind the scenes.)
>
> Even without a cursor, the result can be read incrementally.
>
> I mean, the query result is transferred over the network, right?
> We can just stop read()'ing before we reach the end of the result set,
> and continue reading at pg_fetch as needed.

It's not that simple.  libpq is designed to read whole result sets at a
time; there's no support for reading incrementally from the server.
Another problem is that neither libpq nor the server knows how many
tuples the query will return until the whole query is executed.  Thus,
pg_numrows (for example) wouldn't work at all, which is a showstopper
for many PHP scripts.

In short, it can be made to work, but it's not as simple as you put it.
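For illustration, here is a minimal sketch of the cursor-based approach
mentioned above: batches are pulled with FETCH, so libpq only ever holds
one batch in memory instead of the whole table.  The connection string,
cursor name, table name ("bigtab"), and batch size are all assumptions
for the example, not anything from this thread; note it also shows the
row-count problem, since the total is only known after the last FETCH.

```c
/* Sketch: incremental retrieval through a server-side cursor.
 * "dbname=test" and "bigtab" are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Cursors only exist inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE cur CURSOR FOR SELECT * FROM bigtab"));

    long total = 0;
    for (;;) {
        /* Fetch the next batch; only these rows are held in memory. */
        PGresult *res = PQexec(conn, "FETCH 1000 FROM cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);
        if (n == 0) {           /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < n; i++) {
            /* process row i here */
        }
        total += n;
        PQclear(res);
    }

    /* Only now is the row count known -- this is exactly why a
     * pg_numrows-style call can't work up front. */
    printf("rows seen: %ld\n", total);

    PQclear(PQexec(conn, "CLOSE cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```

A driver could hide this loop behind its fetch API, which is the
"fake having the whole table by using a cursor" idea quoted above.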

--
Alvaro Herrera (<alvherre[a]alvh.no-ip.org>)
"Industry suffers from the managerial dogma that for the sake of stability
and continuity, the company should be independent of the competence of
individual employees."                                      (E. Dijkstra)
