Re: is there a way to firmly cap postgres worker memory consumption? - Mailing list pgsql-general

From: Steve Kehlet
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date:
Msg-id: CA+bfosEHQRtpySEc203XUq0bEz3jbgb=c0dRSZo6AOTzOasFSA@mail.gmail.com
In response to: Re: is there a way to firmly cap postgres worker memory consumption? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
On Tue, Apr 8, 2014 at 6:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Well, here's the problem:
>>         ExprContext: 812638208 total in 108 blocks; 183520 free (171 chunks); 812454688 used
>
> So something involved in expression evaluation is eating memory.
> Looking at the query itself, I'd have to bet on this:
>
>>            ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
>
> My guess is that this aggregation is being done across a lot more rows
> than you were expecting, and the resultant array/string therefore eats
> lots of memory.  You might try replacing that with COUNT(*), or even
> better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
> evidence about what the query is asking to compute.

The devs have moved on and want to stick with their new query, so I'll just chalk this up to a bad query and let it go. But I'm glad to have learned a few new tricks, thanks.
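
For anyone hitting this thread later, here is a sketch of the diagnostic substitution Tom describes; the table and column names below are made up, since the full original query isn't shown:

    -- Original aggregate: materializes the entire comma-separated string in memory
    SELECT ARRAY_TO_STRING(ARRAY_AGG(mm.id::CHARACTER VARYING), ',')
    FROM   some_table mm
    GROUP  BY mm.some_key;

    -- Diagnostic variant: reports how many rows the aggregate covers and how long
    -- the resulting string would be, without actually building the array/string
    SELECT COUNT(*)                              AS rows_aggregated,
           SUM(LENGTH(mm.id::CHARACTER VARYING)) AS total_chars
    FROM   some_table mm
    GROUP  BY mm.some_key;

If the second query reports a huge row count or character total, that explains the ExprContext growth in the memory context dump above.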
 
> Meanwhile, it seems like ulimit -v would provide the safety valve
> you asked for originally.

Thank you Amador and Tom for the ulimit solution, that's exactly what I needed.
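
For the archives, a minimal sketch of how that safety valve could be applied; the 8 GB figure, data directory, and pg_ctl invocation are only examples, and in practice the limit belongs in whatever init script starts the postmaster, since each backend inherits it from there:

    # ulimit -v takes kilobytes: 8388608 KB = 8 GB of address space per process
    ulimit -v 8388608
    pg_ctl -D /var/lib/pgsql/data start

A backend that tries to exceed the cap should then fail its allocation with an "out of memory" error (plus a memory context dump like the one above) instead of pushing the whole box into swap or the OOM killer.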
