Hi,
Here is a rebased and lightly commented version of Tom's patch, separated
from the discussion in [1].
I still occasionally see planner peak memory consumption driven by
selectivity estimation of massive arrays and of clauses generated for
multiple partitions.
I think that as the number of selectivity estimations grows, the planner
should consume memory in a scalable manner, so this thread is an attempt
to draw the community's attention to the issue.
I question some decisions in that patch. For example, the 'depth' and
even 'usage' fields could be incorporated into the MemoryContext
structure itself, which would make the reset function safer. To avoid
overhead, this could apply only to a 'short-lived' class of memory
contexts, declared with an additional parameter to AllocSetContextCreate
at context creation.
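To illustrate the idea, here is a minimal standalone sketch (not
PostgreSQL code; the names ShortLivedContext, ctx_create, ctx_note_alloc
and ctx_reset are all hypothetical): the context carries its own 'depth'
and 'usage', and only contexts created with the extra short-lived flag
pay the tracking cost or respond to a reset.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a memory context that carries its own
 * 'depth' and 'usage', so a reset routine needs no external
 * bookkeeping and can safely no-op on ordinary contexts. */
typedef struct ShortLivedContext
{
	struct ShortLivedContext *parent;
	int		depth;			/* nesting level, derived from parent */
	long	usage;			/* bytes handed out since last reset */
	int		short_lived;	/* set via an extra creation parameter */
} ShortLivedContext;

static ShortLivedContext *
ctx_create(ShortLivedContext *parent, int short_lived)
{
	ShortLivedContext *ctx = calloc(1, sizeof(ShortLivedContext));

	ctx->parent = parent;
	ctx->depth = parent ? parent->depth + 1 : 0;
	ctx->short_lived = short_lived;
	return ctx;
}

/* Track usage only for short-lived contexts, keeping the common
 * allocation path free of extra overhead. */
static void
ctx_note_alloc(ShortLivedContext *ctx, long nbytes)
{
	if (ctx->short_lived)
		ctx->usage += nbytes;
}

/* Reset is a no-op unless the context declared itself short-lived,
 * which makes it safe to call from generic code. Returns the number
 * of tracked bytes released. */
static long
ctx_reset(ShortLivedContext *ctx)
{
	long	freed = 0;

	if (ctx->short_lived)
	{
		freed = ctx->usage;
		ctx->usage = 0;
	}
	return freed;
}
```

In this sketch a reset called on a long-lived context simply does
nothing, which is the safety property mentioned above, while the
per-allocation accounting is confined to contexts that opted in.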
[1] https://www.postgresql.org/message-id/1367418.1708816059@sss.pgh.pa.us
--
regards, Andrei Lepikhov