Re: configurability of OOM killer - Mailing list pgsql-hackers

From Dawid Kuroczko
Subject Re: configurability of OOM killer
Msg-id 758d5e7f0802080059t69196689n3c2c2ee56d54c2fe@mail.gmail.com
In response to Re: configurability of OOM killer  (Martijn van Oosterhout <kleptog@svana.org>)
Responses Re: configurability of OOM killer  (Martijn van Oosterhout <kleptog@svana.org>)
List pgsql-hackers
On Feb 7, 2008 11:59 PM, Martijn van Oosterhout <kleptog@svana.org> wrote:
> On Thu, Feb 07, 2008 at 08:22:42PM +0100, Dawid Kuroczko wrote:
> > Noooow, I know work_mem is not a "total per-process limit", but
> > rather per sort/hash/etc operation.  I know the scheme is a bit
> > sketchy, but I think this would allow more memory-greedy
> > operations to use memory, while taking into consideration that
> > they are not the only ones out there.  And that these settings
> > would be more like hints than actual limits.
>
> Given that we don't even control memory usage within a single process
> that accurately, it seems a bit difficult to do it across the board. You
> just don't know when you start a query how much memory you're going to
> use...

Of course.  My idea does nothing to guarantee memory usage control.
It just makes backends slightly more aware of their siblings when they
allocate memory.  There is nothing wrong with one backend taking
512MB of RAM for its own use when nobody else needs it.  There is
something wrong with it taking 512MB of RAM when three others
have already done the same.

Hmm, I guess it would be possible to emulate this with the help of a cron
job which would examine PostgreSQL's current memory consumption, calculate
a new "suggested work_mem", write it into postgresql.conf and reload the
config file.  Ugly at best (and calculating the total memory used would be
a pain), but it could be used to test whether this proposal has any merit
at all.
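
Something along these lines, just as a quick hack to show the moving parts
(everything here is a made-up assumption for illustration: the 4GB budget,
the 100-connection divisor, the workmem.conf include file and the data
directory path -- none of it is anything PostgreSQL provides):

/*
 * workmem_cron.c -- run from cron every few minutes; Linux only.
 * Assumes postgresql.conf contains: include 'workmem.conf'
 */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUDGET_KB   (4L * 1024 * 1024)  /* assumed RAM budget: 4GB */
#define CONNECTIONS 100                 /* assumed max_connections */

/* Sum VmRSS (in kB) over all processes whose comm is "postgres". */
static long
postgres_rss_kb(void)
{
    long total = 0;
    DIR *proc = opendir("/proc");
    struct dirent *de;

    while (proc && (de = readdir(proc)) != NULL)
    {
        char path[64], buf[128];
        FILE *f;

        if (de->d_name[0] < '0' || de->d_name[0] > '9')
            continue;                   /* not a pid directory */

        snprintf(path, sizeof(path), "/proc/%s/comm", de->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        if (fgets(buf, sizeof(buf), f) == NULL)
            buf[0] = '\0';
        fclose(f);
        if (strncmp(buf, "postgres", 8) != 0)
            continue;

        snprintf(path, sizeof(path), "/proc/%s/status", de->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        while (fgets(buf, sizeof(buf), f))
        {
            long kb;

            if (sscanf(buf, "VmRSS: %ld", &kb) == 1)
                total += kb;
        }
        fclose(f);
    }
    if (proc)
        closedir(proc);
    return total;
}

int
main(void)
{
    long spare_kb = BUDGET_KB - postgres_rss_kb();
    long work_mem_kb = spare_kb / CONNECTIONS;

    if (work_mem_kb < 1024)
        work_mem_kb = 1024;             /* never suggest below 1MB */

    FILE *f = fopen("/etc/postgresql/workmem.conf", "w");
    if (f == NULL)
        return 1;
    fprintf(f, "work_mem = %ldkB\n", work_mem_kb);
    fclose(f);

    /* tell the postmaster to re-read its config */
    return system("pg_ctl -D /var/lib/pgsql/data reload");
}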

> > ....while we are at it -- one feature would be great for 8.4: an
> > ability to change the shared buffers size "on the fly".  I expect
> > it is not trivial, but it would help with fine-tuning a running
> > database.  I think the DBA would need to set a maximum shared
> > buffers size alongside the normal setting.
>
> Shared memory segments can't be resized... There's not even a kernel
> API to do it.

That is true.  However, it is possible to allocate more than one shared
memory segment.  At its simplest, I would have the DBA specify a minimum
shared memory size (say, 1GB) and an expected maximum (2GB), with the
range between minimum and maximum allocated in reasonably sized chunks --
say, 128MB each.  The DBA could then resize shared buffers to 1.5GB,
decide it was not a good idea after all, and reduce it to 1280MB.
From the allocation point of view it would be:
1) one big 1GB chunk
2) one 128MB chunk
3) another 128MB chunk
4) a 128MB chunk declared dead -- new pages are prohibited, old pages stay
   there until every backend gets rid of them
5) another 128MB chunk, same as 4

I am not sure the chunk size should be constant -- but it should be
something reasonably small IF we want to be able to deallocate chunks.
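
To make the bookkeeping concrete, a rough sketch (names and numbers are
made up; real code would keep this table in shared memory itself and have
the buffer manager honour the dead flag).  Conveniently, SysV IPC_RMID
already gives the "removed after the last backend detaches" behaviour:

/*
 * shm_chunks.c -- illustrative sketch only.  Grow shared buffers by
 * attaching fixed-size SysV segments; shrink by retiring the newest.
 */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define CHUNK_BYTES (128L * 1024 * 1024)    /* 128MB chunks */
#define MAX_CHUNKS  8                       /* 1GB of growth room */

typedef struct
{
    int   shmid;                /* SysV segment id */
    void *base;                 /* where this process attached it */
    int   dead;                 /* 1 = no new pages allowed here */
} ShmChunk;

static ShmChunk chunks[MAX_CHUNKS];
static int nchunks = 0;

/* Attach one more chunk; returns its index, or -1 on failure
 * (kernel.shmmax may need raising for segments this large). */
static int
chunk_add(void)
{
    int   id;
    void *p;

    if (nchunks >= MAX_CHUNKS)
        return -1;
    id = shmget(IPC_PRIVATE, CHUNK_BYTES, IPC_CREAT | 0600);
    if (id < 0)
        return -1;
    p = shmat(id, NULL, 0);
    if (p == (void *) -1)
        return -1;
    chunks[nchunks].shmid = id;
    chunks[nchunks].base = p;
    chunks[nchunks].dead = 0;
    return nchunks++;
}

/* Retire the newest live chunk.  IPC_RMID only marks the segment
 * for destruction: the kernel keeps it alive until the last process
 * detaches, which is exactly step 4 in the list above. */
static void
chunk_retire(void)
{
    int i;

    for (i = nchunks - 1; i >= 0; i--)
    {
        if (!chunks[i].dead)
        {
            chunks[i].dead = 1;
            shmctl(chunks[i].shmid, IPC_RMID, NULL);
            return;
        }
    }
}

int
main(void)
{
    /* 1GB base segment not shown; grow by two chunks, shrink by one,
     * as in the 1GB -> 1.5GB -> 1280MB example (scaled down). */
    if (chunk_add() < 0 || chunk_add() < 0)
    {
        perror("shmget/shmat");
        return 1;
    }
    chunk_retire();
    printf("%d chunks attached, newest dead = %d\n",
           nchunks, chunks[nchunks - 1].dead);
    return 0;
}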

Now, it would give the DBA the ability to start with fail-safe settings
and gradually increase shared buffers without forcing a restart.  And the
ability to roll back ;-) from overallocating memory (yes, it would be a
slow process).
   Regards,
      Dawid

