Re: Enhancing Memory Context Statistics Reporting - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: Enhancing Memory Context Statistics Reporting
Date
Msg-id 202411141148.vqxmwtn2ln25@alvherre.pgsql
List pgsql-hackers
On 2024-Nov-14, Michael Paquier wrote:

> Already mentioned previously at [1] and echoing with some surrounding
> arguments, but I'd suggest to keep it simple and just remove entirely
> the part of the patch where the stats information gets spilled into
> disk.  With more than 6000-ish context information available with a
> hard limit in place, there should be plenty enough to know what's
> going on anyway.

Functionality-wise, I don't necessarily agree with _removing_ the spill
code, considering that production systems with thousands of tables would
easily reach that number of contexts (each index gets its own index info
context, each regexp gets its own memcxt); and I don't think silently
omitting a fraction of people's memory situation (or erroring out if the
case is hit) is going to make us any friends.

That said, it worries me that we choose a shared memory size so large
that it becomes impractical to hit the spill-to-disk code in regression
testing.  Maybe we can choose a much smaller limit size when
USE_ASSERT_CHECKING is enabled, and use a test that hits that number?
That way, we know the code is being hit and tested, without imposing a
huge memory consumption on test machines.

-- 
Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
"Tiene valor aquel que admite que es un cobarde" (Fernandel)
