Thread: memory explosion on planning complex query

From: Andrew Dunstan
Attached is some anonymized DDL for a fairly complex schema from a
PostgreSQL Experts client, along with a query that runs against that
schema. The client's problem is that in trying to run an EXPLAIN of
the query, Postgres simply runs out of memory. On my untuned 9.3 test
rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat
clearly shows the EXPLAIN chewing up about 7GB of memory. When it's
done, free memory jumps back to where it was. In a similar case on the
client's test rig we saw memory use jump considerably more.

The client's question is whether this is a bug. It certainly seems
like it should be possible to plan a query without chewing up this
much memory, or at least to limit the amount of memory that can be
grabbed during planning. Going from humming along happily to OOM
conditions just by running "explain <somequery>" is not very friendly.

cheers

andrew


Re: memory explosion on planning complex query

From: Peter Geoghegan
On Wed, Nov 26, 2014 at 2:00 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
> The client's question is whether this is a bug. It certainly seems like
> it should be possible to plan a query without chewing up this much memory,
> or at least to limit the amount of memory that can be grabbed during
> planning. Going from humming along happily to OOM conditions just by
> running "explain <somequery>" is not very friendly.


Have you tried this with a "#define SHOW_MEMORY_STATS" build, or
otherwise rigged Postgres to call MemoryContextStats() at interesting
times?
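
For concreteness, a minimal sketch of that sort of rigging (hypothetical
placement and untested; the interesting spot depends on where the growth
actually happens) would be to drop a call into the planner and watch
stderr:

  /* Rough instrumentation sketch, not a tested patch: dump the memory
   * context hierarchy to stderr at the end of planning.
   * MemoryContextStats() and TopMemoryContext are declared in
   * utils/memutils.h; standard_planner() lives in
   * src/backend/optimizer/plan/planner.c.
   */
  #include "utils/memutils.h"

  /* ... at the end of standard_planner(), just before returning ... */
  MemoryContextStats(TopMemoryContext);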

-- 
Peter Geoghegan



Re: memory explosion on planning complex query

From: Tomas Vondra
On 26.11.2014 23:26, Peter Geoghegan wrote:
> On Wed, Nov 26, 2014 at 2:00 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
>> The client's question is whether this is a bug. It certainly seems like
>> it should be possible to plan a query without chewing up this much memory,
>> or at least to limit the amount of memory that can be grabbed during
>> planning. Going from humming along happily to OOM conditions just by
>> running "explain <somequery>" is not very friendly.
> 
> 
> Have you tried this with a "#define SHOW_MEMORY_STATS" build, or
> otherwise rigged Postgres to call MemoryContextStats() at interesting
> times?

FWIW, this does the trick on a regular build:
  gdb -batch -x gdb.cmd -p $PID

where gdb.cmd is a file with a single line:
  p MemoryContextStats(TopMemoryContext)

Just execute it at the interesting moment when a lot of memory is consumed.
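
The PID to attach to can be taken from the session that will run the
EXPLAIN with SELECT pg_backend_pid(), or looked up in pg_stat_activity
from another session.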

Tomas



Re: memory explosion on planning complex query

From: Antonin Houska
On 11/26/2014 11:00 PM, Andrew Dunstan wrote:
> 
> Attached is some anonymized DDL for a fairly complex schema from a
> PostgreSQL Experts client, along with a query that runs against that
> schema. The client's problem is that in trying to run an EXPLAIN of
> the query, Postgres simply runs out of memory. On my untuned 9.3 test
> rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat
> clearly shows the EXPLAIN chewing up about 7GB of memory. When it's
> done, free memory jumps back to where it was. In a similar case on the
> client's test rig we saw memory use jump considerably more.
> 
> The client's question is whether this is a bug. It certainly seems
> like it should be possible to plan a query without chewing up this
> much memory, or at least to limit the amount of memory that can be
> grabbed during planning. Going from humming along happily to OOM
> conditions just by running "explain <somequery>" is not very friendly.

It's not trivial to track the whole hierarchy of views, but I think it
can result in the FROM list or some JOIN lists becoming very long. How
about setting from_collapse_limit / join_collapse_limit to a
lower-than-default value?
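
For instance, running SET from_collapse_limit = 1 and SET
join_collapse_limit = 1 in the session before the EXPLAIN keeps the
planner from collapsing subqueries into the parent query and from
reordering explicit JOINs, which should show fairly quickly whether the
size of the collapsed join search space is what is eating the memory.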

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at



Re: memory explosion on planning complex query

From: Andrew Dunstan
On 11/26/2014 05:26 PM, Peter Geoghegan wrote:
> On Wed, Nov 26, 2014 at 2:00 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
>> The client's question is whether this is a bug. It certainly seems like
>> it should be possible to plan a query without chewing up this much memory,
>> or at least to limit the amount of memory that can be grabbed during
>> planning. Going from humming along happily to OOM conditions just by
>> running "explain <somequery>" is not very friendly.
>
> Have you tried this with a "#define SHOW_MEMORY_STATS" build, or
> otherwise rigged Postgres to call MemoryContextStats() at interesting
> times?
>

No, but I can. Good idea, thanks.

cheers

andrew




Re: memory explosion on planning complex query

From: Andrew Dunstan
On 11/26/2014 05:00 PM, Andrew Dunstan wrote:
>
> Attached is some anonymized DDL for a fairly complex schema from a
> PostgreSQL Experts client, along with a query that runs against that
> schema. The client's problem is that in trying to run an EXPLAIN of
> the query, Postgres simply runs out of memory. On my untuned 9.3 test
> rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat
> clearly shows the EXPLAIN chewing up about 7GB of memory. When it's
> done, free memory jumps back to where it was. In a similar case on the
> client's test rig we saw memory use jump considerably more.
>
> The client's question is whether this is a bug. It certainly seems
> like it should be possible to plan a query without chewing up this
> much memory, or at least to limit the amount of memory that can be
> grabbed during planning. Going from humming along happily to OOM
> conditions just by running "explain <somequery>" is not very friendly.
>

A further data point, thanks to Andrew Gierth (a.k.a. RhodiumToad) for
pointing it out: the query itself grabs about 600MB to 700MB to run,
whereas the EXPLAIN takes vastly more, on my system about ten times as
much. Surely that's not supposed to happen?

cheers

andrew




Re: memory explosion on planning complex query

From: Robert Haas
On Wed, Nov 26, 2014 at 7:24 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
> On 11/26/2014 05:00 PM, Andrew Dunstan wrote:
>> Attached is some anonymized DDL for a fairly complex schema from a
>> PostgreSQL Experts client, along with a query that runs against that
>> schema. The client's problem is that in trying to run an EXPLAIN of
>> the query, Postgres simply runs out of memory. On my untuned 9.3 test
>> rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat
>> clearly shows the EXPLAIN chewing up about 7GB of memory. When it's
>> done, free memory jumps back to where it was. In a similar case on the
>> client's test rig we saw memory use jump considerably more.
>>
>> The client's question is whether this is a bug. It certainly seems
>> like it should be possible to plan a query without chewing up this
>> much memory, or at least to limit the amount of memory that can be
>> grabbed during planning. Going from humming along happily to OOM
>> conditions just by running "explain <somequery>" is not very friendly.
>>
>
> A further data point, thanks to Andrew Gierth (a.k.a. RhodiumToad) for
> pointing it out: the query itself grabs about 600MB to 700MB to run,
> whereas the EXPLAIN takes vastly more, on my system about ten times as
> much. Surely that's not supposed to happen?

Hmm.  So you can run the query but you can't EXPLAIN it?

That sounds like it could well be a bug, but I'm thinking you might
have to instrument palloc() to find out where all of that space is
being allocated, or maybe connect gdb to the server while the EXPLAIN
is chewing up memory and pull some backtraces to figure out what
section of code it's stuck in.
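
If it comes to that, the gdb -batch recipe Tomas posted upthread works
for the backtraces as well: put bt (or bt full) in the command file in
place of the MemoryContextStats() call and fire it a few times while
the EXPLAIN is running.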

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company