Re: is there a way to firmly cap postgres worker memory consumption? - Mailing list pgsql-general

From: Tom Lane
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Msg-id: 1161.1397007136@sss.pgh.pa.us
In response to: Re: is there a way to firmly cap postgres worker memory consumption? (Steve Kehlet <steve.kehlet@gmail.com>)
Responses: Re: is there a way to firmly cap postgres worker memory consumption? (Steve Kehlet <steve.kehlet@gmail.com>)
List: pgsql-general
Steve Kehlet <steve.kehlet@gmail.com> writes:
> Thank you. For some reason I couldn't get it to trip with "ulimit -d
> 51200", but "ulimit -v 1572864" (1.5GiB) got me this in serverlog. I hope
> this is readable, if not it's also here:

Well, here's the problem:

>         ExprContext: 812638208 total in 108 blocks; 183520 free (171 chunks); 812454688 used

So something involved in expression evaluation is eating memory.
Looking at the query itself, I'd have to bet on this:

>            ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')

My guess is that this aggregation is being done across a lot more rows
than you were expecting, and the resultant array/string therefore eats
lots of memory.  You might try replacing that with COUNT(*), or even
better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
evidence about what the query is asking to compute.
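For illustration, a rough sketch of that substitution; only the aggregate
expression is visible in this thread, so the table name behind the MM alias
below is a placeholder, and the original FROM/WHERE/GROUP BY should be kept
as-is:

    -- Hypothetical stand-in for the original query; only the aggregate changes.
    -- Instead of building the whole array/string in memory with
    --   ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
    -- report how many rows feed the aggregate and how large its result would be:
    SELECT COUNT(*)                              AS rows_aggregated,
           SUM(LENGTH(MM.ID::CHARACTER VARYING)) AS total_id_chars
    FROM   some_table MM;   -- "some_table" is a placeholder for the real FROM clause

If rows_aggregated or total_id_chars comes back far larger than you expected,
that's the confirmation that the aggregated string itself is what's eating
the memory.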

Meanwhile, it seems like ulimit -v would provide the safety valve
you asked for originally.  I too am confused about why -d didn't
do it, but as long as you found a variant that works ...
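For instance, in whatever script launches the postmaster, something along
these lines should do it (the data-directory path is only a placeholder);
the limit is inherited by every backend the postmaster forks:

    # cap each process's virtual address space at 1.5GiB (ulimit -v takes KiB)
    ulimit -v 1572864
    pg_ctl -D /path/to/data start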

            regards, tom lane

