Re: Running out of memory while making a join - Mailing list pgsql-general

From: Carlos Henrique Reimer
Subject: Re: Running out of memory while making a join
Date:
Msg-id: CAJnnue325hB738sO_HYQdUGrEEBT8zqDGfY+JB8d5JZ2K_KK=Q@mail.gmail.com
In response to: Re: Running out of memory while making a join (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Running out of memory while making a join (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Hi Tom,

Thank you for the analysis!

There is no problem in using "select wm_nfsp.*", but since my concern is preventing this in the future: should I apply the patch, or is there a configuration parameter that aborts the backend when it reaches some kind of memory limit?
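
Just to confirm I read the workaround correctly, here is a minimal sketch of the two forms (using the table from my earlier message; the column expansion is only illustrative):

        -- Composite-column form: each row comes back as a single record
        -- value, which goes through record_out() and leaks memory until
        -- the end of the query.
        select wm_nfsp from "5611_isarq".wm_nfsp;

        -- Workaround: expand the columns instead, so no composite value
        -- (and no record_out() call) is involved.
        select wm_nfsp.* from "5611_isarq".wm_nfsp;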

Thank you!

Reimer


On Tue, Nov 13, 2012 at 5:51 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Carlos Henrique Reimer <carlos.reimer@opendb.com.br> writes:
> That is what I got from gdb:

>       ExecutorState: 11586756656 total in 1391 blocks; 4938408 free (6 chunks); 11581818248 used

So, query-lifespan memory leak.  After poking at this for a bit, I think
the problem has nothing to do with joins; more likely it's because you
are returning a composite column:

        select wm_nfsp from "5611_isarq".wm_nfsp ...

I found out that record_out() leaks sizable amounts of memory, which
won't be recovered till end of query.  You could work around that by
returning "select wm_nfsp.*" instead, but if you really want the result
in composite-column notation, I'd suggest applying this patch:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c027d84c81d5e07e58cd25ea38805d6f1ae4dfcd

                        regards, tom lane



--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.reimer@opendb.com.br
