Re: [PERFORM] Correct use of cursors for very large result sets in Postgres - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
Date:
Msg-id: 26739.1487439829@sss.pgh.pa.us
In response to: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres (Mike Beaton <mjsbeaton@gmail.com>)
List: pgsql-performance
Mike Beaton <mjsbeaton@gmail.com> writes:
> One outstanding question I have. Based on a lot of helpful responses given
> to the SO question I can now test and see what disk buffers are generated
> (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how
> long it takes for results to start arriving.

> With a large (10,000,000 row) test table, if I do `SELECT * FROM table` on
> psql it starts to return results immediately with no disk buffer. If I do
> `FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start
> returning results, and generates a 14MB buffer. If I do `SELECT * FROM
> table` on a correctly coded streaming client, it also starts to return
> results immediately with no disk buffer. But if I do `FETCH ALL FROM
> cursortotable` from my streaming client, it takes about 1.5 seconds for
> results to start coming... but again with no disk buffer, as hoped.
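
If I'm understanding the setup, the psql side of that test looks roughly
like the following; the table name here is just a stand-in for yours, and
the cursor name is the one from your message:

    -- log every temporary file the server creates (superuser only;
    -- otherwise set it in postgresql.conf), then watch the server log
    -- for "temporary file" lines, e.g. with tail -f
    SET log_temp_files = 0;

    -- plain query: rows stream back as soon as they're produced
    SELECT * FROM test_table;

    -- cursor variant: a non-holdable cursor has to live inside a transaction
    BEGIN;
    DECLARE cursortotable CURSOR FOR SELECT * FROM test_table;
    FETCH ALL FROM cursortotable;
    COMMIT;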

Seems odd.  Is your cursor just on "SELECT * FROM table", or is there
some processing in there you're not mentioning?  Maybe it's a cursor
WITH HOLD and you're exiting the source transaction?
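
If it is WITH HOLD and the transaction has already been closed, the
server-side sequence is roughly this (names are stand-ins again).  At
COMMIT the backend has to materialize the cursor's entire result set so it
can outlive the transaction, spilling to a temp file once it's bigger than
work_mem, and that alone would account for a startup delay and an on-disk
buffer:

    BEGIN;
    DECLARE cursortotable CURSOR WITH HOLD FOR SELECT * FROM test_table;
    COMMIT;  -- the whole result set is materialized here, spilling to a
             -- temp file once it exceeds work_mem

    FETCH ALL FROM cursortotable;  -- reads back from the materialized copy
    CLOSE cursortotable;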

            regards, tom lane

