Thread: Re: Enhancing Memory Context Statistics Reporting

Re: Enhancing Memory Context Statistics Reporting

From: Alvaro Herrera
Date: 2024-10-26 16:14:25 +0200
On 2024-Oct-21, Rahila Syed wrote:

> I propose enhancing memory context statistics reporting by combining
> these capabilities and offering a view of memory statistics for all
> PostgreSQL backends and auxiliary processes.

Sounds good.

> A fixed-size shared memory block, currently accommodating 30 records,
> is used to store the statistics.

Hmm, would it make sense to use dynamic shared memory for this?  The
publishing backend could dsm_create one DSM chunk of the exact size that
it needs, pass the dsm_handle to the consumer, and then have it be
destroyed once it's been read.  That way you don't have to define an
arbitrary limit of any size.  (Maybe you could keep a limit to how much
is published in shared memory and spill the rest to disk, but I think
such a limit should be very high[1], so that it's unlikely to take
effect in normal cases.)

[1] This is very arbitrary of course, but 1 MB gives enough room for
some 7000 contexts, which should cover normal cases.
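
Roughly, in storage/dsm.h terms, that handoff would be (a sketch, not
compilable standalone; MemoryContextStat and nstats are hypothetical
stand-ins for whatever the patch actually publishes):

```
/* Publishing backend: size the segment exactly, then hand over the
 * handle -- no fixed 30-record limit needed. */
dsm_segment *seg = dsm_create(nstats * sizeof(MemoryContextStat), 0);
memcpy(dsm_segment_address(seg), stats, nstats * sizeof(MemoryContextStat));
dsm_handle handle = dsm_segment_handle(seg);   /* passed to the consumer */

/* Consumer: attach by handle, read, detach; the segment goes away once
 * the last attached process detaches (unless it was pinned). */
dsm_segment *theirs = dsm_attach(handle);
MemoryContextStat *stats = dsm_segment_address(theirs);
/* ... read stats[0 .. nstats-1] ... */
dsm_detach(theirs);
```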

-- 
Álvaro Herrera         PostgreSQL Developer  —  https://www.EnterpriseDB.com/
"Find a bug in a program, and fix it, and the program will work today.
Show the program how to find and fix a bug, and the program
will work forever" (Oliver Silfridge)



Re: Enhancing Memory Context Statistics Reporting

From
Andres Freund
Date:
Hi,

On 2024-10-26 16:14:25 +0200, Alvaro Herrera wrote:
> > A fixed-size shared memory block, currently accommodating 30 records,
> > is used to store the statistics.
> 
> Hmm, would it make sense to use dynamic shared memory for this?

+1


> The publishing backend could dsm_create one DSM chunk of the exact size that
> it needs, pass the dsm_handle to the consumer, and then have it be destroyed
> once it's been read.

I'd probably just make it a dshash table or such, keyed by the pid, pointing
to a dsa allocation with the stats.
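
Something along these lines (a sketch, not compilable standalone; uses
lib/dshash.h and utils/dsa.h, with MemCtxEntry, memctx_hash, memctx_dsa,
MemoryContextStat and fill_stats() as hypothetical names):

```
/* One shared dshash table, keyed by pid; each entry points at a
 * dsa_allocate'd block holding that process's context stats. */
typedef struct MemCtxEntry
{
    int         pid;        /* dshash key */
    dsa_pointer stats;      /* per-context stats array in the dsa area */
    int         nstats;
} MemCtxEntry;

/* Publishing process: */
bool        found;
MemCtxEntry *entry = dshash_find_or_insert(memctx_hash, &MyProcPid, &found);
if (found && DsaPointerIsValid(entry->stats))
    dsa_free(memctx_dsa, entry->stats);     /* drop any stale stats */
entry->stats = dsa_allocate(memctx_dsa, nstats * sizeof(MemoryContextStat));
fill_stats(dsa_get_address(memctx_dsa, entry->stats));
entry->nstats = nstats;
dshash_release_lock(memctx_hash, entry);

/* Consumer: dshash_find(memctx_hash, &pid, false), read the stats via
 * dsa_get_address(), then dshash_release_lock(). */
```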


> That way you don't have to define an arbitrary limit
> of any size.  (Maybe you could keep a limit to how much is published in
> shared memory and spill the rest to disk, but I think such a limit should be
> very high[1], so that it's unlikely to take effect in normal cases.)
> 
> [1] This is very arbitrary of course, but 1 MB gives enough room for
> some 7000 contexts, which should cover normal cases.

Agreed. I can see a point in a limit for extreme cases, but spilling to disk
doesn't seem particularly useful.

Greetings,

Andres Freund