On Fri, 2021-04-02 at 13:31 +0000, Campbell, Lance wrote:
> It feels like there need to be work_mem and work_mem_stack_size. When work memory is
> needed, a process “pops” a token off of a stack. When it is done processing, it “puts”
> the token back on the stack. If the stack is empty, then don’t allocate memory; just
> write to disk instead of using work_mem.
>
> This does two key things:
>
> 1) It allows for a real-world understanding of how much memory is really needed on a
> day-to-day basis. You can track how often the stack is empty. You can also look at the
> number of temp files to see when work exceeds the work_mem allocation. There is no
> “art” to setting these values; you can use logical analysis to make choices.
>
> 2) It also prevents out-of-memory issues. You are protecting yourself from extreme loads.
If I understand you correctly, you want another memory limit per session.
I see the point, but then we wouldn't need "work_mem" any more, right?
What is the point of limiting the memory per plan node if we have an
overall limit?
In practice, I have never had trouble with "work_mem". I usually follow
my rule of thumb: max_connections * work_mem + shared_buffers < RAM.
While some backends may need more, many will need less. Only bitmaps, hashes
and sorts are memory-hungry.
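As a worked example of that rule of thumb (the numbers are illustrative
assumptions, not recommendations):

```python
# Rule of thumb:  max_connections * work_mem + shared_buffers < RAM
# All figures in megabytes; the values below are made-up examples.
max_connections = 100
work_mem_mb = 64            # work_mem = '64MB'
shared_buffers_mb = 8192    # shared_buffers = '8GB'
ram_mb = 32768              # 32 GB of physical RAM

worst_case_mb = max_connections * work_mem_mb + shared_buffers_mb
print(worst_case_mb)             # 14592
print(worst_case_mb < ram_mb)    # True: comfortably on the safe side
```

Keep in mind this is only a rule of thumb: a single backend can use several
work_mem-sized allocations at once (one per sort or hash node in a plan),
which is why the estimate errs on the conservative side.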
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com