Re: Creating a function for exposing memory usage of backend process - Mailing list pgsql-hackers

From Kasahara Tatsuhito
Subject Re: Creating a function for exposing memory usage of backend process
Date
Msg-id CAP0=ZV+bH1SAeCbBnOq95c-4SkonEijG_B8yJd4+j2PM1f6+cQ@mail.gmail.com
In response to Re: Creating a function for exposing memory usage of backend process  (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
Hi,

On Fri, Jun 26, 2020 at 3:42 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
> While going through the mail chain on relation, plan and catalogue
> caching [1], I'm wondering whether there is a way to know the
> current relation, plan and catalogue cache sizes. If there is a way
> already, please ignore this, and I would be grateful if someone could
> point me to it.
AFAIK, the only way so far to get statistics on a PostgreSQL backend's
internal (local) memory usage is to call MemoryContextStats() via gdb,
which dumps the information to the server log.
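As a rough sketch, attaching gdb to a running backend and calling
MemoryContextStats() on the top-level context looks like the following
(the PID 12345 is just a placeholder; the output goes to the backend's
stderr, i.e. normally the server log):

```
$ gdb -p 12345
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
(gdb) quit
```

This requires superuser/OS access to the server host and briefly stops
the backend, which is exactly why an SQL-callable function for exposing
this information would be convenient.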

> If there is no such way to know the cache sizes and other info such as
> statistics, number of entries, cache misses, hits, etc., can the
> approach discussed here be applied?
I think the answer is partially yes.

> If the user knows the cache statistics and other information, maybe
> we can allow the user to take appropriate actions, such as deleting a
> few entries through a command or some other way.
Yeah, one of the purposes of the features we are discussing here is to
use them in such situations.

Regards,

-- 
Tatsuhito Kasahara
kasahara.tatsuhito _at_ gmail.com


