Re: Pre-allocation of shared memory ... - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Pre-allocation of shared memory ...
Date
Msg-id 6734.1055428171@sss.pgh.pa.us
In response to Re: Pre-allocation of shared memory ...  (Jon Lapham <lapham@extracta.com.br>)
Responses Re: Pre-allocation of shared memory ...  (Jon Lapham <lapham@extracta.com.br>)
Re: Pre-allocation of shared memory ...  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
Jon Lapham <lapham@extracta.com.br> writes:
> Just curious.  What would a rationally designed OS do in an out of 
> memory situation?

Fail malloc() requests.

The sysctl docs that Andrew Dunstan just provided give some insight into
the problem: the default behavior of Linux is to promise more virtual
memory than it can actually deliver.  That is, it allows malloc to
succeed even when it's not going to be able to actually provide the
address space when push comes to shove.  When called to stand and
deliver, the kernel has no way to report failure (other than perhaps a
software-induced SIGSEGV, which would hardly be an improvement).  So it
kills the process instead.  Unfortunately, the process that happens to
be in the line of fire at this point could be any process, not only the
one that made unreasonable memory demands.
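
To make the failure mode concrete, here is a minimal C sketch (an illustration, not part of the original mail; the 64 GB figure is arbitrary and the behavior depends on the kernel's overcommit setting).  Under the default policy the malloc() below usually returns non-NULL, so the NULL check is the only failure the program ever gets to handle; the real failure surfaces only when the pages are written, at which point the kernel may kill this process, or an unrelated one.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    size_t      request = (size_t) 64 * 1024 * 1024 * 1024;    /* 64 GB */
    char       *p = malloc(request);

    if (p == NULL)
    {
        /* The only out-of-memory case the application can handle cleanly. */
        fprintf(stderr, "malloc failed\n");
        return 1;
    }

    /*
     * With overcommit enabled we usually get here even though the kernel
     * may be unable to back the whole allocation.  Writing to every page
     * forces it to stand and deliver; if it cannot, the OOM killer picks
     * a victim.
     */
    memset(p, 0xAB, request);

    printf("all pages touched successfully\n");
    free(p);
    return 0;
}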

This is perhaps an okay behavior for desktop systems being run by
people who are accustomed to Microsoft-like reliability.  But to make it
the default is brain-dead, and to make it the only available behavior
(as seems to have been true until very recently) defies belief.  The
setting now called "paranoid overcommit" is IMHO the *only* acceptable
one for any sort of server system.  With anything else, you risk having
critical userspace daemons killed through no fault of their own.
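
For reference, a small sketch (again an editorial illustration, not part of the original mail) that checks which policy is in effect.  On current Linux kernels the knob is /proc/sys/vm/overcommit_memory: 0 is the default heuristic overcommit, 1 always overcommits, and 2 is strict accounting, which corresponds to the "paranoid overcommit" mode discussed above.  An administrator would normally switch it with sysctl (vm.overcommit_memory = 2) rather than from C.

#include <stdio.h>

int
main(void)
{
    FILE       *fp = fopen("/proc/sys/vm/overcommit_memory", "r");
    int         mode;

    if (fp == NULL)
    {
        fprintf(stderr, "could not open /proc/sys/vm/overcommit_memory\n");
        return 1;
    }
    if (fscanf(fp, "%d", &mode) != 1)
    {
        fclose(fp);
        fprintf(stderr, "could not parse overcommit setting\n");
        return 1;
    }
    fclose(fp);

    /*
     * 0 = heuristic overcommit (default), 1 = always overcommit,
     * 2 = strict accounting, where malloc() fails up front instead of
     * letting the OOM killer pick a victim later.
     */
    printf("vm.overcommit_memory = %d\n", mode);
    return 0;
}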
        regards, tom lane

