Re: Optimize planner memory consumption for huge arrays - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Optimize planner memory consumption for huge arrays
Date:
Msg-id: 4095836.1708357512@sss.pgh.pa.us
In response to Re: Optimize planner memory consumption for huge arrays  (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List: pgsql-hackers
Tomas Vondra <tomas.vondra@enterprisedb.com> writes:
> Considering there are now multiple patches improving memory usage during
> planning with partitions, perhaps it's time to take a step back and
> think about how we manage (or rather not manage) memory during query
> planning, and see if we could improve that instead of an infinite
> sequence of ad hoc patches?

+1, I've been getting an itchy feeling about that too.  I don't have
any concrete proposals ATM, but I quite like your idea here:

> For example, I don't think we expect selectivity functions to allocate
> long-lived objects, right? So maybe we could run them in a dedicated
> memory context, and reset it aggressively (after each call).

That could eliminate a whole lot of potential leaks.  I'm not sure
though how much it moves the needle in terms of overall planner memory
consumption.  I've always supposed that the big problem was data
structures associated with rejected Paths, but I might be wrong.
Is there some simple way we could get a handle on where the most
memory goes while planning?

            regards, tom lane


