Thread: psql client memory usage
Hi all,
I have a fairly simple query, running on a particularly large table. For illustration:
echo "select * from really_big_table;" | psql my_database > /dev/null
When I monitor the memory usage of the psql session, it continually grows. In fact, for this particularly large table, it grows to the point of consuming all swap, before the OOM killer takes steps to resolve it.
Clearly, this isn't what I'd like to happen.
My settings are:
Postgresql 9.1.9
work_mem = 256MB
effective_cache_size = 12GB
shared_buffers = 6GB
I have 24GB physical ram to play with.
Regardless of these settings, however, I'm surprised that psql wouldn't release that memory as it flushes each record, but maybe I've misunderstood how memory usage works on the client side.
Tim Kane wrote:
> I have a fairly simple query, running on a particularly large table. For
> illustration:
>
> echo "select * from really_big_table;" | psql my_database > /dev/null

See psql's FETCH_COUNT. From the manpage:

FETCH_COUNT
    If this variable is set to an integer value > 0, the results of SELECT
    queries are fetched and displayed in groups of that many rows, rather
    than the default behavior of collecting the entire result set before
    display. Therefore only a limited amount of memory is used, regardless
    of the size of the result set.

Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
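For an interactive session, the same variable can be set from the psql prompt. A minimal sketch (the batch size of 1000 is arbitrary; really_big_table is the table from the original post):

  \set FETCH_COUNT 1000
  SELECT * FROM really_big_table;
  \unset FETCH_COUNT

With FETCH_COUNT set, psql only ever holds one batch of rows in memory at a time, so the client-side footprint stays roughly constant regardless of the size of the result set.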
(13/09/06 21:06), Tim Kane wrote:
> Hi all,
>
> I have a fairly simple query, running on a particularly large table. For
> illustration:
>
> echo "select * from really_big_table;" | psql my_database > /dev/null
>
> When I monitor the memory usage of the psql session, it continually grows.
> In fact, for this particularly large table it grows to the point of
> consuming all swap, before the OOM killer takes steps to resolve it.
> Clearly, this isn't what I'd like to happen.
>
> My settings are:
> Postgresql 9.1.9
> work_mem = 256MB
> effective_cache_size = 12GB
> shared_buffers = 6GB
>
> I have 24GB physical ram to play with.

This is a client-side problem (not server-side).
See the description of FETCH_COUNT, please.
http://www.postgresql.org/docs/9.2/static/app-psql.html

echo "select * from really_big_table;" | psql --variable=FETCH_COUNT=100 my_database > /dev/null

Regards,
Ahh. All these years (albeit sporadic), I never knew about FETCH_COUNT.
That makes sense. Thanks muchly.

On 06/09/2013 14:11, "Suzuki Hironobu" <hironobu@interdb.jp> wrote:
> (13/09/06 21:06), Tim Kane wrote:
>> Hi all,
>>
>> I have a fairly simple query, running on a particularly large table.
>> For illustration:
>>
>> echo "select * from really_big_table;" | psql my_database > /dev/null
>>
>> When I monitor the memory usage of the psql session, it continually grows.
>> In fact, for this particularly large table it grows to the point of
>> consuming all swap, before the OOM killer takes steps to resolve it.
>> Clearly, this isn't what I'd like to happen.
>>
>> My settings are:
>> Postgresql 9.1.9
>> work_mem = 256MB
>> effective_cache_size = 12GB
>> shared_buffers = 6GB
>>
>> I have 24GB physical ram to play with.
>
> This is a client-side problem (not server-side).
> See the description of FETCH_COUNT, please.
> http://www.postgresql.org/docs/9.2/static/app-psql.html
>
> echo "select * from really_big_table;" | psql --variable=FETCH_COUNT=100 my_database > /dev/null
>
> Regards,
On Fri, Sep 6, 2013 at 8:19 AM, Tim Kane <tim.kane@gmail.com> wrote:
> Ahh. All these years (albeit sporadic), I never knew about FETCH_COUNT.
> That makes sense. Thanks muchly.

Not your fault: FETCH_COUNT is a hack IMO. The real issue was that libpq
(until recently) forced the entire result into memory before it was
returned to the caller. There is now an API in libpq (thanks Marko) that
allows processing rows as they come in. I expect psql will soon be
adjusted to utilize that new API (although exactly how is unclear);
runaway memory consumption in libpq/psql burns a *lot* of people. I
personally find cursors to be baroque and rarely use them except
internally inside pl/pgsql functions.

merlin
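Roughly speaking, FETCH_COUNT is a convenience wrapper around an explicit cursor loop run on the caller's behalf. A minimal sketch of doing the same thing by hand (the cursor name big_cur and the batch size of 1000 are arbitrary):

  BEGIN;
  DECLARE big_cur NO SCROLL CURSOR FOR SELECT * FROM really_big_table;
  -- fetch in batches; repeat the FETCH until it returns no rows
  FETCH FORWARD 1000 FROM big_cur;
  CLOSE big_cur;
  COMMIT;

Note that DECLARE must run inside a transaction block unless the cursor is created WITH HOLD.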
On Sep 6, 2013, at 6:56 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Fri, Sep 6, 2013 at 8:19 AM, Tim Kane <tim.kane@gmail.com> wrote:
>> Ahh. All these years (albeit sporadic), I never knew about FETCH_COUNT.
>> That makes sense. Thanks muchly.
>
> Not your fault: FETCH_COUNT is a hack IMO. The real issue was that
> libpq (until recently) forced the entire result into memory before it
> was returned to the caller. There is now an API in libpq (thanks Marko)
> that allows processing rows as they come in. I expect psql will soon be
> adjusted to utilize that new API (although exactly how is unclear);

What version did/does this come into effect?

alan
On Mon, Sep 09, 2013 at 01:56:33PM -0700, Alan Nilsson wrote:
> On Sep 6, 2013, at 6:56 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> On Fri, Sep 6, 2013 at 8:19 AM, Tim Kane <tim.kane@gmail.com> wrote:
>>> Ahh. All these years (albeit sporadic), I never knew about FETCH_COUNT.
>>> That makes sense. Thanks muchly.
>>
>> Not your fault: FETCH_COUNT is a hack IMO. The real issue was that
>> libpq (until recently) forced the entire result into memory before it
>> was returned to the caller. There is now an API in libpq (thanks Marko)
>> that allows processing rows as they come in. I expect psql will soon be
>> adjusted to utilize that new API (although exactly how is unclear);
>
> What version did/does this come into effect?

This is with PostgreSQL 9.2; see PQsetSingleRowMode:
http://www.postgresql.org/docs/9.2/static/libpq-single-row-mode.html

I do not expect psql will be adjusted to utilize the new API:
http://www.postgresql.org/message-id/CAEYkp92z2w3VBs4uxWPwub7k4hGW-vEPW_WNSui9R5T+cGpLYw@mail.gmail.com

-Ryan Kelly