Re: Large selects handled inefficiently? - Mailing list pgsql-general

From: Chris
Subject: Re: Large selects handled inefficiently?
Date:
Msg-id: 39ADDEDE.3BFB4EB7@nimrod.itg.telecom.com.au
In response to: Large selects handled inefficiently? (Jules Bean <jules@jellybean.co.uk>)
Responses: Re: Large selects handled inefficiently? (Jules Bean <jules@jellybean.co.uk>)
List: pgsql-general
Jules Bean wrote:
>
> On Thu, Aug 31, 2000 at 12:22:36AM +1000, Andrew Snow wrote:
> >
> > > I believe I can work around this problem using cursors (although I
> > > don't know how well DBD::Pg copes with cursors).  However, that
> > > doesn't seem right -- cursors shouldn't be needed to fetch a large query
> > > without having it all in memory at once...
> >
> > Actually, I think that's why cursors were invented in the first place ;-)  A
> > cursor is what you are using if you're not fetching all the results of a
> > query.
>
> I really can't agree with you there.
>
> A cursor is another slightly foolish SQL hack.

Not quite, but it is true that this is a flaw in Postgres. Implementing a
"streaming" interface has been discussed on -hackers from time to time.
The idea is that the client doesn't have to absorb the entire result set
before any of it becomes accessible: you can start processing rows as they
arrive, blocking in the client when none are ready yet. The main changes
would be to the libpq client library, but there would be other issues to
address as well, such as what happens if an error occurs halfway through
the result set. In short, I'm sure this will be fixed at some stage, but
for now cursors are the only real answer.
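
For what it's worth, here is a rough sketch of the cursor approach from
DBD::Pg (untested, and the database and table names are made up for
illustration):

    use DBI;

    # Cursors only exist inside a transaction, so turn AutoCommit off.
    my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                           { AutoCommit => 0, RaiseError => 1 });

    $dbh->do("DECLARE mycsr CURSOR FOR SELECT * FROM bigtable");

    while (1) {
        my $sth = $dbh->prepare("FETCH 1000 FROM mycsr");
        $sth->execute;

        my $got = 0;
        while (my @row = $sth->fetchrow_array) {
            $got++;
            # process @row here -- only about 1000 rows are ever
            # held in client memory at once
        }
        last unless $got;    # FETCH returned nothing, so we're done
    }

    $dbh->do("CLOSE mycsr");
    $dbh->commit;
    $dbh->disconnect;

The FETCH size is a trade-off between round trips to the server and
client-side memory; 1000 is just a plausible starting point.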
