Re: configurability of OOM killer - Mailing list pgsql-hackers

From Dawid Kuroczko
Subject Re: configurability of OOM killer
Date
Msg-id 758d5e7f0802071122w6c5cd873l29ffe72d125cec8e@mail.gmail.com
In response to Re: configurability of OOM killer  (Ron Mayer <rm_pg@cheapcomplexdevices.com>)
Responses Re: configurability of OOM killer
Re: configurability of OOM killer
List pgsql-hackers
On Feb 5, 2008 10:54 PM, Ron Mayer <rm_pg@cheapcomplexdevices.com> wrote:
> Decibel! wrote:
> >
> > Yes, this problem goes way beyond OOM. Just try and configure
> > work_memory aggressively on a server that might see 50 database
> > connections, and do it in such a way that you won't swap. Good luck.
>
> That sounds like an even broader and more difficult problem
> than managing memory.
>
> If you have 50 connections that all want to perform large sorts,
> what do you want to have happen?
>
>   a) they each do their sorts in parallel with small amounts
>      of memory for each; probably all spilling to disk?
>   b) they each get a big chunk of memory but some have to
>      wait for each other?
>   c) something else?

Something else. :-)

I think there could be some additional parameters which would
control how much memory there is in total, say:

process_work_mem = 128MB        # Some other name needed...
process_work_mem_percent = 20%  # Yeah, definitely some other name...
total_work_mem = 1024MB         # how much there is for you in total.


Your postgres spawns 50 processes which initially don't
use much work_mem.  They would all register their current
work_mem usage in shared memory.
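A minimal sketch of that registration step, with a mutex-protected counter standing in for the real shared-memory segment and locking (the function names here are hypothetical, just to illustrate the bookkeeping):

```c
#include <pthread.h>

/* Hypothetical shared accounting of per-backend work_mem usage.
 * In a real backend this would live in shared memory behind a lock;
 * a mutex-protected counter stands in for that here. */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int total_used_mb = 0;   /* sum of all backends' registered work_mem */

static void register_work_mem(int mb)
{
    pthread_mutex_lock(&pool_lock);
    total_used_mb += mb;        /* backend claims mb megabytes */
    pthread_mutex_unlock(&pool_lock);
}

static void release_work_mem(int mb)
{
    pthread_mutex_lock(&pool_lock);
    total_used_mb -= mb;        /* backend gives its claim back */
    pthread_mutex_unlock(&pool_lock);
}
```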

Each process, when it expects a largish sort, tries to determine
how much memory there is for the taking, to calculate its own
work_mem.  work_mem should not exceed process_work_mem,
and should not exceed 20% of the total available free memory.

So, one backend needs to make a huge sort.  It determines the
limit for it is 128MB and allocates it.

Another backend starts sorting.  It determines the current free
mem is about (1024-128) * 20% =~ 179MB, but capped by
process_work_mem it takes 128MB.

Some time passes, 700MB of total_work_mem is used, and
another backend decides it needs much memory.
It determines its current free mem to be not more than
(1024-700) * 20% =~ 64MB, so it sets its work_mem to 64MB
and sorts away.

Noooow, I know work_mem is not a "total per process limit", but
rather per sort/hash/etc operation.  I know the scheme is a bit
sketchy, but I think this would allow more memory-greedy
operations to use memory while taking into consideration that
they are not the only ones out there.  And these settings
would be more like hints than actual limits.


....while we are at it -- one feature would be great for 8.4: the
ability to change the shared buffers size "on the fly".  I expect
it is not trivial, but it would help with fine-tuning a running
database.  I think the DBA would need to set a maximum shared
buffers size along with the normal setting.

Regards,  Dawid

