Re: Postgres Connections Requiring Large Amounts of Memory - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date:
Msg-id: 14875.1055878682@sss.pgh.pa.us
In response to: Re: Postgres Connections Requiring Large Amounts of Memory (Dawn Hollingsworth <dmh@airdefense.net>)
Responses: Re: Postgres Connections Requiring Large Amounts of Memory
List: pgsql-performance
Dawn Hollingsworth <dmh@airdefense.net> writes:
> I attached gdb to a connection using just over 400MB( according to top)
> and ran "MemoryContextStats(TopMemoryContext)"
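The diagnostic step quoted above can be sketched roughly as follows; this is a sketch only, assuming a backend PID of 12345 (a placeholder) and a server built with debug symbols. `MemoryContextStats` writes its report to the backend's stderr, so the output appears in the server log rather than in the gdb session.

```shell
# Identify the backend process whose resident size looks bloated in top,
# then attach gdb to it (12345 is a placeholder PID) and dump the
# per-context memory statistics without stopping the server.
gdb -p 12345 -batch -ex 'call MemoryContextStats(TopMemoryContext)'
```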

Hmm.  This only seems to account for about 5 meg of space, which means
either that lots of space is being used and released, or that the leak
is coming from direct malloc calls rather than palloc.  I doubt the
latter though; we don't use too many direct malloc calls.

On the former theory, could it be something like updating a large
number of tuples in one transaction in a table with foreign keys?
The pending-triggers list could have swelled up and then gone away
again.
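A hypothetical way to reproduce that theory (table and column names are made up for illustration): each row touched by one large statement on a table involved in a foreign-key relationship queues an after-row RI trigger event, and that pending-triggers list is held in backend memory until the checks run, then released.

```shell
# Hypothetical schema: child.parent_id REFERENCES parent(id).
# One statement touching many rows queues one pending RI trigger
# event per row; memory swells during the statement and is freed
# again afterward, which would not show up in a later context dump.
psql mydb -c "UPDATE child SET parent_id = parent_id;"
```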

The large number of SPI Plan contexts seems a tad fishy, and even more
so the fact that some of them are rather large.  They still only account
for a couple of meg, so they aren't directly the problem, but perhaps
they are related to the problem.  I presume these came from either
foreign-key triggers or something you've written in PL functions.  Can
you tell us more about what you use in that line?

            regards, tom lane
