On Fri, Jun 26, 2020 at 12:02:10AM -0400, Bruce Momjian wrote:
>On Fri, Jun 26, 2020 at 01:53:57AM +0200, Tomas Vondra wrote:
>> I'm not saying it's not beneficial to use different limits for different
>> nodes. Some nodes are less sensitive to the size (e.g. sorting often
>> gets faster with smaller work_mem). But I think we should instead have a
>> per-session limit, and the planner should "distribute" the memory to
>> different nodes. It's a hard problem, of course.
>
>Yeah, I am actually confused why we haven't developed a global memory
>allocation strategy and continue to use per-session work_mem.
>
I think it's a pretty hard problem, actually. One of the reasons is that
the costing of a node depends on the amount of memory available to it,
but as we're building the plan bottom-up, we have no information about
the nodes above us. So we don't know whether there are operations above
that will need memory, how sensitive they are to it, etc.

And so far the per-node limit has served us pretty well, I think. So I'm
not too surprised we don't have a per-session limit yet, TBH.
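To illustrate the per-node behavior (a made-up example, the table names
are hypothetical): work_mem applies to each executor node separately, so
one query can use a multiple of the setting:

    SET work_mem = '64MB';

    -- If this plans as a hash join feeding a sort, those are two
    -- separate nodes, and each may use up to work_mem on its own,
    -- so the query could use roughly 2x work_mem in total.
    SELECT a.id, b.payload
      FROM big_table_a a
      JOIN big_table_b b USING (id)
     ORDER BY b.payload;

A per-session limit would instead have to split one budget across all
such nodes at planning time, before the full shape of the tree is known.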
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services