Re: psql large RSS (1.6GB) - Mailing list pgsql-performance

From Dustin Sallings
Subject Re: psql large RSS (1.6GB)
Date
Msg-id 8132EBC8-2B02-11D9-A2DE-000A957659CC@spy.net
In response to psql large RSS (1.6GB)  (TTK Ciar <ttk2@hardpoint.ciar.org>)
List pgsql-performance
On Oct 27, 2004, at 0:57, TTK Ciar wrote:

>   At a guess, it looks like the data set is being buffered in its
> entirety by psql, before any data is written to the output file,
> which is surprising.  I would have expected it to grab data as it
> appeared on the socket from postmaster and write it to disk.  Is
> there something we can do to stop psql from buffering results?
> Does anyone know what's going on here?

    Yes, the entire result set is transferred to the client and held in
memory before any of it can be used.  An easy workaround when dealing
with this much data is to use a cursor.  Something like this:

db# start transaction;
START TRANSACTION
db# declare logcur cursor for select * from some_table;
DECLARE CURSOR
db# fetch 5 in logcur;
[...]
(5 rows)

    This will do approximately what you expected the select to do in the
first place, but the fetch size determines how many rows the client
buffers at a time.
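    To read the whole table this way, keep fetching until a fetch comes
back with zero rows, then close the cursor and end the transaction.  A
sketch continuing the session above (the batch size of 1000 is
arbitrary; pick whatever your client's memory comfortably holds):

db# fetch 1000 in logcur;
[...]
(1000 rows)
[... repeat until a fetch returns 0 rows ...]
db# close logcur;
CLOSE CURSOR
db# commit;
COMMIT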

>   If the solution is to just write a little client that uses perl
> DBI to fetch rows one at a time and write them out, that's doable,
> but it would be nice if psql could be made to "just work" without
> the monster RSS.

    It wouldn't make a difference unless that driver implements the
underlying protocol on its own; a client built on the standard client
library buffers the full result set the same way psql does, so you'd
still want a cursor there.

--
SPY                      My girlfriend asked me which one I like better.
pub  1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
|    Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________

