On Monday 11 July 2005 03:38, Alvaro Herrera wrote:
> On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
> > On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
>
> > > Really? I thought what really happened is you had to get the results
> > > one at a time using the pg_fetch family of functions. If that is true,
> > > then it's possible to make the driver fake having the whole table by
> > > using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> > > it behind the scenes.)
> >
> > Even without a cursor, the result can be read incrementally.
> >
> > I mean, the query result is transferred over the network, right?
> > We can just stop read()'ing before we reach the end of the result set,
> > and continue reading at pg_fetch time as needed.
>
> It's not that simple. libpq is designed to read whole result sets at a
> time; there's no support for reading incrementally from the server.
> Another problem is that neither libpq nor the server knows how many
> tuples the query will return until the whole query has executed. Thus,
> pg_numrows (for example) wouldn't work at all, which is a showstopper
> for many PHP scripts.
>
> In short, it can be made to work, but it's not as simple as you put it.
This sounds reasonable.
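For reference, the cursor-based workaround Alvaro mentions is already
expressible on top of today's libpq: each FETCH is an ordinary query
whose small result libpq buffers whole, so the driver only ever holds
a window of rows at a time. A minimal sketch (table name and batch
size are made up, error handling trimmed):

/* Sketch: how a driver could fake incremental reads with a cursor.
 * "big_table" is a hypothetical table name. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* hypothetical DSN */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* Cursors only live inside a transaction. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table"));

    for (;;) {
        /* Each FETCH is a normal query; libpq buffers its whole
         * result, but that is only 100 rows at a time. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM c");
        int ntup = PQntuples(res);
        for (int i = 0; i < ntup; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
        if (ntup == 0)          /* cursor exhausted */
            break;
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

Note that this still cannot answer pg_numrows up front, which is
exactly the problem Alvaro points out.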
Consider my posts in this thread as a user wish for either:
* the libpq API and the network protocol to be changed to allow
incremental reads of executed queries and multiple outstanding
result sets,
or, if the above looks insurmountable at the moment,
* a libpq-only change to allow incremental reads of a single
outstanding result set (see the sketch after this list). An attempt
to use pg_numrows, etc., or to execute another query would force
libpq to read and store all remaining rows in the client's memory
(i.e. the current behaviour).
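For what it's worth, here is roughly what such a row-at-a-time
interface could look like from the application side. (Caveat:
PQsetSingleRowMode only appeared in libpq much later, in
PostgreSQL 9.2; it is shown here purely to illustrate the kind of
API being wished for. "big_table" is again hypothetical.)

/* Row-at-a-time retrieval: only one row is ever held in memory. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* hypothetical DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQsendQuery(conn, "SELECT * FROM big_table");
    PQsetSingleRowMode(conn);      /* must follow PQsendQuery directly */

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        /* PGRES_SINGLE_TUPLE: one row, not the whole set, is buffered */
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}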
--
vda