Hi,
On 2024-10-26 16:14:25 +0200, Alvaro Herrera wrote:
> > A fixed-size shared memory block, currently accommodating 30 records,
> > is used to store the statistics.
>
> Hmm, would it make sense to use dynamic shared memory for this?
+1
> The publishing backend could dsm_create one DSM chunk of the exact size that
> it needs, pass the dsm_handle to the consumer, and then have it be destroyed
> once it's been read.
I'd probably just make it a dshash table or such, keyed by the backend's PID,
with each entry pointing to a dsa allocation holding the stats.
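Untested sketch of what I mean; the entry struct, function name and tranche
id below are made up for illustration, not from the patch:

#include "postgres.h"

#include "lib/dshash.h"
#include "miscadmin.h"
#include "utils/dsa.h"

typedef struct MemCtxStatsEntry
{
	int			pid;			/* hash key: publishing backend's PID */
	dsa_pointer stats;			/* dsa allocation holding the stats */
	int			nstats;			/* number of records in the allocation */
} MemCtxStatsEntry;

static const dshash_parameters memctx_stats_params = {
	sizeof(int),				/* key_size */
	sizeof(MemCtxStatsEntry),	/* entry_size */
	dshash_memcmp,
	dshash_memhash,
	dshash_memcpy,				/* copy_function, 17+ */
	0							/* tranche id, placeholder */
};

/* publisher side: insert or refresh our own entry */
static void
publish_memctx_stats(dsa_area *area, dshash_table *ht,
					 dsa_pointer stats, int nstats)
{
	bool		found;
	MemCtxStatsEntry *entry;

	entry = dshash_find_or_insert(ht, &MyProcPid, &found);
	if (found && DsaPointerIsValid(entry->stats))
		dsa_free(area, entry->stats);	/* drop the stale stats */
	entry->stats = stats;
	entry->nstats = nstats;
	dshash_release_lock(ht, entry);
}

The consumer side would dshash_find() the PID it is interested in and
dsa_get_address() the stats; backend exit would need to remove the entry and
free the allocation.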
> That way you don't have to define an arbitrary limit
> of any size. (Maybe you could keep a limit on how much is published in
> shared memory and spill the rest to disk, but I think such a limit should be
> very high[1], so that it's unlikely to take effect in normal cases.)
>
> [1] This is very arbitrary of course, but 1 MB gives enough room for
> some 7000 contexts, which should cover normal cases.
Agreed. I can see a point in a limit for extreme cases, but spilling to disk
doesn't seem particularly useful.
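I.e. the publisher would just truncate. Continuing the sketch from above,
with MemCtxStatsRecord again a made-up record type:

#define MEMCTX_STATS_MAX	7000	/* arbitrary cap, per [1] */

typedef struct MemCtxStatsRecord
{
	char		name[64];		/* context name, truncated to fit */
	int64		total_bytes;
	int64		used_bytes;
} MemCtxStatsRecord;

static dsa_pointer
allocate_capped_stats(dsa_area *area, int ncontexts, int *nstats)
{
	*nstats = Min(ncontexts, MEMCTX_STATS_MAX);
	return dsa_allocate(area, *nstats * sizeof(MemCtxStatsRecord));
}

Any contexts beyond the cap would simply not be reported, perhaps with a flag
indicating that the result was truncated.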
Greetings,
Andres Freund