Re: psql memory usage when creating delimited files - Mailing list pgsql-general

From Volkan YAZICI
Subject Re: psql memory usage when creating delimited files
Date 2006-09-18 18:21:46
Msg-id 20060918182146.GA1418@alamut
In response to psql memory usage when creating delimited files  (David Brain <dbrain@bandwidth.com>)
List pgsql-general
On Sep 18 01:52, David Brain wrote:
> I had an interesting issue the other day while trying to generate
> delimited files from a query in psql, using:
>
> \f'|'
> \t
> \a
> \o out.file
> select * from really_big_table order by createddate;
>
> The quantity of data involved here is fairly large (maybe 2-4GB).
> Watching the memory usage, the postmaster consumed a fair chunk of RAM
> (obviously) while running the query, but I was surprised to see psql
> start taking increasingly large quantities of RAM, to the point that in
> the end the machine's memory was exhausted, postmaster died (and
> restarted OK), causing psql to quit.
>
> I was surprised because I had assumed that psql would just take rows
> from the postmaster and write them to disk, hence requiring very little
> RAM, but it appears that it tries to load all the data into memory.

See "BUG #1756: PQexec eats huge amounts of memory"[1] and "SELECT very
slow"[2] threads in the ml archives.

[1] http://archives.postgresql.org/pgsql-bugs/2005-07/msg00074.php
[2] http://archives.postgresql.org/pgsql-sql/2005-06/msg00118.php

> Is there some option I'm missing in my export script that would prevent
> this happening?  I managed to work around the issue by issuing a number
> of smaller queries, but that's not something I want to do on a regular
> basis.

Use COPY instead; it streams rows out as they are produced rather than
building the whole result set in client memory. (You can also prefer
cursors in your client applications; see the sketches below and the
related discussions.)
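For example, a minimal sketch using the table and file names from your
message: psql's \copy runs COPY on the server but reads and writes the
file on the client, streaming rows to disk instead of accumulating the
whole result set in memory.

    \copy really_big_table to 'out.file' with delimiter '|'

Note that COPY on a table dumps rows in no particular order, so the
ORDER BY is lost (newer releases also let COPY take a query, which
would preserve it). If the ordering matters, a cursor preserves it
while still bounding client memory, since only one batch of rows is
held at a time. A sketch, again with your table name and an arbitrary
batch size of 1000:

    begin;
    declare big_cur cursor for
        select * from really_big_table order by createddate;
    fetch forward 1000 from big_cur;  -- repeat until no rows come back
    close big_cur;
    commit;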


Regards.
