Re: Prevent out of memory errors by reducing work_mem? - Mailing list pgsql-general

From Tom Lane
Subject Re: Prevent out of memory errors by reducing work_mem?
Date
Msg-id 10644.1359124947@sss.pgh.pa.us
In response to Prevent out of memory errors by reducing work_mem?  (Jan Strube <js@deriva.de>)
List pgsql-general
Jan Strube <js@deriva.de> writes:
> I'm getting an out of memory error running the following query over 6
> tables (the *BASE* tables have over 1 million rows each) on PostgreSQL
> 9.1. The machine has 4GB RAM:

It looks to me like you're suffering an executor memory leak that's
probably unrelated to the hash joins as such.  The leak is in the
ExecutorState context:

> ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
> chunks); 3437811560 used

while the subsidiary HashXYZ contexts don't look like they're going
beyond what they've been told to.

So the first question is 9.1.what?  We've fixed execution-time memory
leaks as recently as 9.1.7.
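For reference, the exact minor version can be checked from any session; something like the following (standard PostgreSQL commands, nothing specific to this report) will show whether the server is already on 9.1.7:

```sql
-- Either of these reports the running server version, e.g. 9.1.7.
SELECT version();
SHOW server_version;
```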

If you're on 9.1.7, or if after updating you can still reproduce the
problem, please see if you can create a self-contained test case.
My guess is it would have to do with the specific data types and
operators being used in the query, but not so much with the specific
data, so you probably could create a test case that just uses tables
filled with generated random data.
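A reproducer along those lines might be sketched as below. The table and column names here are placeholders, not taken from the original query; the idea is to fill tables of comparable size with `generate_series()` and `random()`, then substitute the join structure, data types, and operators from the failing query:

```sql
-- Hypothetical skeleton for a self-contained test case.
-- base_a / base_b / join_key are invented names; adjust row counts,
-- column types, and the final query to match the original workload.
CREATE TABLE base_a AS
  SELECT g AS id,
         (random() * 100000)::int AS join_key,
         md5(random()::text)      AS payload
  FROM generate_series(1, 1000000) AS g;

CREATE TABLE base_b AS
  SELECT g AS id,
         (random() * 100000)::int AS join_key,
         md5(random()::text)      AS payload
  FROM generate_series(1, 1000000) AS g;

ANALYZE base_a;
ANALYZE base_b;

-- Replace this with the join shape from the problem query:
SELECT count(*)
FROM base_a a
JOIN base_b b USING (join_key);
```

If the leak depends on the specific types and operators, swapping the `int` join key for the types used in the real query should be the first thing to vary.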

            regards, tom lane



