Re: Optimize planner memory consumption for huge arrays - Mailing list pgsql-hackers

From Andrei Lepikhov
Subject Re: Optimize planner memory consumption for huge arrays
Date
Msg-id 3f8fde0c-b4aa-4e36-9113-604ef6e20cb2@postgrespro.ru
In response to Re: Optimize planner memory consumption for huge arrays  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Optimize planner memory consumption for huge arrays
List pgsql-hackers
On 20/2/2024 04:51, Tom Lane wrote:
> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:
>> On 2/19/24 16:45, Tom Lane wrote:
>>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:
>>>> For example, I don't think we expect selectivity functions to allocate
>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>> memory context, and reset it aggressively (after each call).
> Here's a quick and probably-incomplete implementation of that idea.
> I've not tried to study its effects on memory consumption, just made
> sure it passes check-world.
Thanks for the sketch. The trick with planner_tmp_cxt_depth looks 
especially interesting.
I think we should design small memory contexts - one for each scalable 
source of memory consumption, such as selectivity estimation or 
partitioning (Append planning?).
My coding experience shows that the short-lived GEQO memory context 
forces people to learn Postgres internals more thoroughly :).

-- 
regards,
Andrei Lepikhov
Postgres Professional
