Memory settings, vm.overcommit, how to get it really safe? - Mailing list pgsql-general

From Hannes Dorbath
Subject Memory settings, vm.overcommit, how to get it really safe?
Msg-id f2hmcr$18lg$1@news.hub.org
List pgsql-general
Like probably many people, I've been running PostgreSQL on Linux with
default overcommit settings and a lot of swap space for safety, though
recent documentation recommends disabling overcommit.
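
For reference, this is the knob I mean, as far as I understand the
kernel defaults and what the docs recommend (2.6 kernel assumed):

    # /etc/sysctl.conf
    vm.overcommit_memory = 2   # 0 = heuristic overcommit (kernel default),
                               # 2 = strict accounting, as the docs suggest
    vm.overcommit_ratio = 50   # kernel default; percentage of RAM that
                               # counts towards the commit limit in mode 2

activated with sysctl -p.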

PG's memory usage is not exactly predictable. For settings like work_mem
I have always monitored production load and tried to find a safe
compromise, so that under typical load the box would never go into swap,
while on the other hand users wouldn't need to raise it too often just
to get a few OLAP queries to perform OK.
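
To make the compromise concrete, the envelope calculation I do looks
roughly like this (numbers purely illustrative):

    # postgresql.conf
    shared_buffers = 1GB          # allocated once, shared by all backends
    work_mem = 32MB               # per sort/hash operation, per backend
    maintenance_work_mem = 256MB  # per VACUUM / CREATE INDEX

    # worst case with my 32 backends, assuming ~2 concurrent
    # sorts/hashes per query:
    #   1GB + 32 * 32MB * 2 = 3GB
    # plus maintenance operations and per-backend overhead,
    # before the OS/FS cache gets anything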

What I'm trying now is to get a safe configuration with
vm.overcommit_memory = 2 and, if possible, to run with much less swap
space, or none at all.
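
If I read the kernel docs right, with vm.overcommit_memory = 2 the
amount the kernel will hand out becomes

    CommitLimit = swap + RAM * overcommit_ratio / 100

so with no swap and the default ratio of 50, only half of the RAM would
be committable; the live numbers show up in /proc/meminfo:

    grep -i commit /proc/meminfo   # CommitLimit vs. Committed_AS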

On a clone box I disabled overcommit, lowered PG's memory settings a
bit, disabled swap, mirrored production load to it and monitored how it
behaved. More or less as I expected, it got into trouble after about 6
hours. All memory was exhausted; it was even unable to fork bash again.
To my surprise, I found no evidence of the OOM killer going active in
the logs.
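
This is roughly what I checked for, assuming kernel messages end up in
the usual places:

    dmesg | grep -iE 'oom|out of memory'   # OOM killer activity
    grep -i commit /proc/meminfo           # accounting at that moment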

I attributed this behaviour to the swap space I had taken away, not to
disabling overcommit. However, when I enabled overcommit again and tried
to reproduce the behaviour, I was unable to get the box into trouble,
even with artificially high load.

Now I have a few questions:

1.) Why does it behave differently when only the overcommit setting
changes? To my understanding it should have run out of memory in both
cases, or can PG actually benefit from overcommit being enabled? It's a
minimal setup, with PG being the only thing using any noticeable amount
of resources.

2.) Is it possible at all to put a cap on the total memory PG uses, from
the OS side? kernel.shmmax etc. only limit certain kinds of memory PG
might use, don't they? Of course I'm excluding OS/FS buffers and the
like.
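
The closest thing I'm aware of are per-process rlimits, which don't cap
the cluster as a whole, e.g.:

    # in the init script, before the postmaster starts
    # (inherited by every backend)
    ulimit -v 1048576   # RLIMIT_AS in KB -- but per process, not in total

and kernel.shmmax, as far as I can tell, only bounds the shared memory
segment, i.e. essentially shared_buffers, not work_mem and friends.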

3.) Can PG be made to fall back to its own temp files when it runs out
of memory, without setting the memory settings so low that performance
under typical load gets worse? It would be nice if I didn't need a lot
of swap just to be safe under any load. Shouldn't that be more efficient
than using paged-out memory anyway?
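
I'm aware that a single sort or hash exceeding work_mem already spills
to disk under pgsql_tmp, which can be watched with something like:

    du -sh $PGDATA/base/*/pgsql_tmp   # per-database temp file usage

What I'm after is a fallback for the case where the backends in sum
exceed physical memory.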


Currently it seems to me that I have to sacrifice the performance of
typical load when disabling overcommit and/or reducing swap, as I have
to push PG's memory settings lower to be safe.

What might make my case a little more predictable is that the number of
backend processes / concurrent connections is fixed at 32, as in the
calculation above. There will never be more or fewer.


Thanks for any guidance / clarification.


--
Best regards,
Hannes Dorbath
