Re: Enhancing Memory Context Statistics Reporting - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Enhancing Memory Context Statistics Reporting
Date
Msg-id hi23wbergcrdxzvoibpmiu3vpgkn7pop5mn4zqepfoah3h3w4j@hiltn5pw4f3r
In response to Re: Enhancing Memory Context Statistics Reporting  (Alvaro Herrera <alvherre@alvh.no-ip.org>)
List pgsql-hackers
Hi,

On 2024-10-26 16:14:25 +0200, Alvaro Herrera wrote:
> > A fixed-size shared memory block, currently accommodating 30 records,
> > is used to store the statistics.
> 
> Hmm, would it make sense to use dynamic shared memory for this?

+1


> The publishing backend could dsm_create one DSM chunk of the exact size that
> it needs, pass the dsm_handle to the consumer, and then have it be destroyed
> once it's been read.

I'd probably just make it a dshash table or such, keyed by the pid, pointing
to a dsa allocation with the stats.
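
To illustrate, such a table could look roughly like the sketch below. This is
purely illustrative and not compilable on its own: it assumes PostgreSQL's
dshash/dsa APIs (src/include/lib/dshash.h, src/include/utils/dsa.h), and the
struct name, function name, and tranche id are hypothetical placeholders, not
part of any proposed patch.

```c
/* Illustrative sketch only -- assumes PostgreSQL's dshash/dsa APIs.
 * MemCtxStatsEntry and publish_stats are hypothetical names. */

/* One entry per backend, keyed by pid. */
typedef struct MemCtxStatsEntry
{
	int			pid;			/* hash key */
	dsa_pointer	stats;			/* dsa allocation holding the stats */
} MemCtxStatsEntry;

static const dshash_parameters stats_hash_params = {
	.key_size = sizeof(int),
	.entry_size = sizeof(MemCtxStatsEntry),
	.compare_function = dshash_memcmp,
	.hash_function = dshash_memhash,
	.tranche_id = LWTRANCHE_FIRST_USER_DEFINED	/* placeholder */
};

/*
 * Publishing backend: allocate exactly as much dsa memory as the
 * serialized stats need, then point the backend's hash entry at it.
 */
static void
publish_stats(dsa_area *area, dshash_table *hash, int pid,
			  const void *data, Size len)
{
	bool		found;
	MemCtxStatsEntry *entry;

	entry = dshash_find_or_insert(hash, &pid, &found);
	if (found && DsaPointerIsValid(entry->stats))
		dsa_free(area, entry->stats);	/* drop stale stats */
	entry->stats = dsa_allocate(area, len);
	memcpy(dsa_get_address(area, entry->stats), data, len);
	dshash_release_lock(hash, entry);
}
```

With this shape there is no fixed record limit: each backend's allocation is
sized to its actual number of contexts, and a consumer looks the entry up by
pid and reads through dsa_get_address.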


> That way you don't have to define an arbitrary limit
> of any size.  (Maybe you could keep a limit to how much is published in
> shared memory and spill the rest to disk, but I think such a limit should be
> very high[1], so that it's unlikely to take effect in normal cases.)
> 
> [1] This is very arbitrary of course, but 1 MB gives enough room for
> some 7000 contexts, which should cover normal cases.

Agreed. I can see a point in a limit for extreme cases, but spilling to disk
doesn't seem particularly useful.

Greetings,

Andres Freund


