Re: Postgres consuming way too much memory??? - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: Postgres consuming way too much memory???
Date:
Msg-id: 28419.1150386248@sss.pgh.pa.us
In response to: Re: Postgres consuming way too much memory???  ("jody brownell" <jody.brownell@q1labs.com>)
List: pgsql-performance
"jody brownell" <jody.brownell@Q1Labs.com> writes:
>     BEGIN
>           INSERT into attacker_target_link (attacker_id, target_id) values (p_attacker, v_target);
>           v_returns_size := v_returns_size + 1;
>           v_returns[v_returns_size] := v_target;

>     EXCEPTION WHEN unique_violation THEN
>         -- do nothing... app cache may be out of date.
>     END;

Hmm.  There is a known problem that plpgsql leaks some memory when
catching an exception:
http://archives.postgresql.org/pgsql-hackers/2006-02/msg00885.php

So if your problem case involves a whole lot of duplicates then that
could explain the initial bloat.  However, AFAIK that leakage is in
a transaction-local memory context, so the space ought to be freed at
transaction end.  And Linux's malloc does know about giving space back
to the kernel (unlike some platforms).  So I'm not sure why you're
seeing persistent bloat.

Can you rewrite the function to not use an EXCEPTION block (perhaps
a separate SELECT probe for each row --- note this won't be reliable
if there are concurrent processes making insertions)?  If so, does
that fix the problem?
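
Roughly, the loop body without the EXCEPTION block might look like the
sketch below (variable names and the attacker_target_link table are taken
from your quoted snippet; the surrounding loop and declarations are assumed):

    -- sketch only: probe first, insert only if the row is missing
    -- (as noted above, this is not reliable against concurrent inserters)
    PERFORM 1 FROM attacker_target_link
     WHERE attacker_id = p_attacker AND target_id = v_target;
    IF NOT FOUND THEN
        INSERT INTO attacker_target_link (attacker_id, target_id)
        VALUES (p_attacker, v_target);
        v_returns_size := v_returns_size + 1;
        v_returns[v_returns_size] := v_target;
    END IF;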

            regards, tom lane
