Backend memory dump analysis - Mailing list pgsql-hackers

From Vladimir Sitnikov
Subject Backend memory dump analysis
Msg-id CAB=Je-FdtmFZ9y9REHD7VsSrnCkiBhsA4mdsLKSPauwXtQBeNA@mail.gmail.com
Responses Re: Backend memory dump analysis  (Andres Freund <andres@anarazel.de>)
Re: Backend memory dump analysis  (Teodor Sigaev <teodor@sigaev.ru>)
List pgsql-hackers
Hi,

I am investigating an out-of-memory case for PostgreSQL 9.6.5, and it looks like MemoryContextStatsDetail + gdb are the only friends there.
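(For context, the usual way to get that output on a running 9.6 backend is to attach gdb to the backend process and call the stats function by hand; the PID below is a placeholder:)

```
$ gdb -p <backend_pid>
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
```

The report is written to the backend's stderr, i.e. it ends up in the server log rather than in the gdb session.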

MemoryContextStatsDetail does print some info, however it is rarely possible to associate the used memory with business cases.
For instance:
   CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
      CachedPlanQuery: 130048 total in 7 blocks; 29952 free (2 chunks); 100096 used

It does look like 182 KiB have been spent on some SQL, however there's no clear way to tell which SQL is to blame.

Another case: PL/pgSQL function context: 57344 total in 3 blocks; 17200 free (2 chunks); 40144 used
It is not clear what is inside, which "cached plans" are referenced by that PL/pgSQL context (if any), etc.

It would be great if there were a way to dump the memory in a machine-readable format (e.g. Java's HPROF).

Eclipse Memory Analyzer (https://www.eclipse.org/mat/) can visualize Java memory dumps quite well, and I think the HPROF format is trivial to generate (the generation is easy; the hard part is parsing the memory contents).
That is, we could get an analysis UI for free if PostgreSQL produced the dump.
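As a rough illustration of what the dump side could look like (this is a toy model, not PostgreSQL code: the firstchild/nextchild links mirror the layout of MemoryContextData in src/include/nodes/memnodes.h, but the struct fields, the dump_context name, and the JSON-ish record format are all invented for this sketch):

```c
#include <stdio.h>
#include <stddef.h>

/* Toy model of PostgreSQL's memory context tree. A real implementation
 * would walk the actual MemoryContextData nodes and pull the figures
 * from each context's methods->stats callback. */
typedef struct MemoryContextData
{
    const char *name;
    size_t      totalspace;  /* bytes allocated to this context */
    size_t      freespace;   /* bytes free within those blocks */
    struct MemoryContextData *firstchild;
    struct MemoryContextData *nextchild;
} MemoryContextData, *MemoryContext;

/* Emit one machine-readable record per context, depth-first, and
 * return the cumulative "used" bytes for the whole subtree. */
static size_t
dump_context(MemoryContext ctx, int depth)
{
    size_t used = ctx->totalspace - ctx->freespace;

    printf("%*s{\"name\":\"%s\",\"total\":%zu,\"used\":%zu}\n",
           depth * 2, "", ctx->name, ctx->totalspace, used);

    for (MemoryContext c = ctx->firstchild; c != NULL; c = c->nextchild)
        used += dump_context(c, depth + 1);
    return used;
}
```

Fed the CachedPlanSource/CachedPlanQuery numbers from the example above, this prints the pair as nested records, and an external tool could then aggregate or visualize them without re-parsing the human-readable report.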

Would such a feature be welcome?
Is it something worth including in core?

Vladimir
