Re: [PERFORM] Correct use of cursors for very large result sets in Postgres - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
Date:
Msg-id: 17679.1487683929@sss.pgh.pa.us
In response to: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres (Mike Beaton <mjsbeaton@gmail.com>)
Responses: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
List: pgsql-performance
Mike Beaton <mjsbeaton@gmail.com> writes:
> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
> on `FETCH ALL FROM CursorToHuge`.

I poked into this and determined that it's happening because pquery.c
executes FETCH statements the same way it does any other
tuple-returning utility statement, i.e. "run it to completion and put
the results in a tuplestore, then send the tuplestore contents to the
client".  I think the main reason nobody worried about that being
non-optimal was that we weren't expecting people to FETCH very large
amounts of data in one go --- if you want the whole query result at
once, why are you bothering with a cursor?
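(Put differently: the tuplestore only has to hold whatever a single FETCH
asks for, so pulling the result in moderate batches rather than FETCH ALL
keeps the buffer small. A minimal sketch of that pattern, with an
illustrative cursor name, query, and batch size:)

    BEGIN;
    -- NO SCROLL: backward fetches are not needed here
    DECLARE huge_cur NO SCROLL CURSOR FOR
        SELECT * FROM some_huge_table;      -- illustrative query
    -- Repeat until a FETCH returns fewer rows than requested
    FETCH FORWARD 10000 FROM huge_cur;
    FETCH FORWARD 10000 FROM huge_cur;
    -- ...
    CLOSE huge_cur;
    COMMIT;

Each FETCH goes through the same code path described above, so at most one
batch is materialized server-side at a time.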

This could probably be improved, but it would (I think) require inventing
an additional PortalStrategy specifically for FETCH, and writing
associated code paths in pquery.c.  Don't know when/if someone might get
excited enough about it to do that.

            regards, tom lane

