Re: [INTERFACES] Front end memory consumption in SELECT - Mailing list pgsql-interfaces

From Douglas Thomson
Subject Re: [INTERFACES] Front end memory consumption in SELECT
Date
Msg-id 199911210614.RAA02176@mugca.cc.monash.edu.au
Whole thread Raw
In response to Re: [INTERFACES] Front end memory consumption in SELECT  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [INTERFACES] Back end memory consumption (was Front end memory consumption in SELECT)  (Douglas Thomson <dougt@mugc.cc.monash.edu.au>)
List pgsql-interfaces

Tom Lane <tgl@sss.pgh.pa.us> writes:
> Douglas Thomson <dougt@mugc.cc.monash.edu.au> writes:
> > However, judging by the memory consumption of my front-end process,
> > it would seem that the SELECT is loading the entire table into memory
> > before I even fetch the first row! Can anyone confirm that this is in
> > fact what goes on?
>
> libpq handles SELECTs that way.  You should consider DECLARE CURSOR
> and FETCH if you need to retrieve a large query result in chunks.

Thanks for this suggestion; it has solved my problem very nicely.

In case any Perl interface users are reading this and are interested,
this meant changing my old code, which used to work like:
    my $result = $conn->exec("SELECT * from big_table");
    $result->resultStatus == PGRES_TUPLES_OK or die(...);
    while (my @tuple = $result->fetchrow) {
        ...
    }
to instead work like:
    my $result = $conn->exec(
            "DECLARE big_cursor CURSOR FOR SELECT * FROM big_table");
    $result->resultStatus == PGRES_COMMAND_OK or die(...);
    do {
        $result = $conn->exec("FETCH 100 FROM big_cursor");
        $result->resultStatus == PGRES_TUPLES_OK or die(...);
        while (my @tuple = $result->fetchrow) {
            ...
        }
    } while ($result->ntuples == 100);  # a short batch means end of table
    $conn->exec("CLOSE big_cursor");    # optional if about to COMMIT anyway
which limits the number of tuples held in memory at any one time to 100.

I found only two cases that I could not handle this way: SELECT FOR
UPDATE (since PostgreSQL cursors are read-only) and SELECTs done
outside of transactions. In my application both of these exceptions
affected only quite small selections (usually just one tuple), so I
was able to revert to the non-cursor SELECT with no problem.
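
For the record, the cursor approach only works inside a transaction
block, so the DECLARE/FETCH/CLOSE sequence above is assumed to be
bracketed by an explicit BEGIN and COMMIT. Here is a minimal,
self-contained sketch of the whole sequence (the connection string
and the row handling are made up for illustration, not taken from my
real code):

    use Pg;

    # Hypothetical connection string -- adjust for your own database.
    my $conn = Pg::connectdb("dbname=test");
    die $conn->errorMessage unless $conn->status == PGRES_CONNECTION_OK;

    # Cursors only exist inside a transaction block.
    $conn->exec("BEGIN");

    my $result = $conn->exec(
            "DECLARE big_cursor CURSOR FOR SELECT * FROM big_table");
    $result->resultStatus == PGRES_COMMAND_OK or die $conn->errorMessage;

    do {
        $result = $conn->exec("FETCH 100 FROM big_cursor");
        $result->resultStatus == PGRES_TUPLES_OK or die $conn->errorMessage;
        while (my @tuple = $result->fetchrow) {
            print join("|", @tuple), "\n";  # process one row at a time
        }
    } while ($result->ntuples == 100);      # a short batch means end of table

    $conn->exec("CLOSE big_cursor");        # optional if COMMIT follows at once
    $conn->exec("COMMIT");

Note that COMMIT (or END) closes any open cursors automatically,
which is why the explicit CLOSE can be skipped when the transaction
is about to end anyway.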

For what it is worth, I measured no significant difference in the
total time taken to traverse the entire table, but the reduction in
memory consumption was dramatic. Note that fetching only a single
tuple at a time (instead of the 100 shown above) did take much
longer.

Hope this information helps someone,

Doug.
