On Mon, Dec 10, 2007 at 04:26:12PM +0100, Listaccount wrote:
> Hello
>
> I have been trapped by the advice from the manual to use "sysctl -w
> vm.overcommit_memory=2" when using Linux (see 16.4.3. Linux Memory
> Overcommit). This value should only be used when PostgreSQL is the
I think you need to read the documentation more carefully, because it
clearly suggests you (1) look at the kernel source and (2) consult a kernel
expert as part of your evaluation.
In any case,
> /proc/meminfo on a longer running system. If "Committed_AS" reaches or
> comes close to "CommitLimit" one should not set overcommit_memory=2 (see
> http://www.redhat.com/archives/rhl-devel-list/2005-February/msg00738.html).
my own reading of that message leads me to the opposite conclusion from
yours: in that case you should _for sure_ set overcommit_memory=2. Here is
why:
> this setting the machine in question may get trouble with "fork
> failed" even if the standard system tools report a lot of free memory
> causing confusion to the admins.
You _want_ the fork to fail when the kernel can't (over)commit the memory,
because otherwise the kernel's OOM killer will come along and maybe blip
your postmaster on the head, causing it to die by surprise. Don't like
that? Use more memory. Or get an operating system that doesn't do stupid
things like promise more memory than it has.
Except, of course, such operating systems are getting rarer and rarer all
the time.
Please note that memory overcommit is sort of like a high-risk mortgage: the
odds that the OS will recover enough memory in any given round start out
high. Eventually, however, the [technical|financial] economy is such
that only high-risk commitments are available, and at that point _someone_
isn't going to pay back enough [memory|money] to the thing demanding it.
From there, it's anyone's guess what happens next.
A