Jon Lapham <lapham@extracta.com.br> writes:
> Just curious. What would a rationally designed OS do in an out of
> memory situation?

Fail malloc() requests.

The sysctl docs that Andrew Dunstan just provided give some insight into
the problem: the default behavior of Linux is to promise more virtual
memory than it can actually deliver. That is, it allows malloc to
succeed even when it's not going to be able to back that address space
with real memory when push comes to shove. When called to stand and
deliver, the kernel has no way to report failure (other than perhaps a
software-induced SIGSEGV, which would hardly be an improvement). So it
kills the process instead. Unfortunately, the process that happens to
be in the line of fire at this point could be any process, not only the
one that made unreasonable memory demands.
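
To make that concrete, here's a rough C sketch of the failure mode
(an illustration only; don't run it on a machine you care about).
Under the default policy the malloc() calls keep succeeding, and the
trouble only surfaces when the pages are actually touched:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 64 * 1024 * 1024;    /* 64 MB per request */
    size_t total = 0;

    for (;;)
    {
        char *p = malloc(chunk);

        if (p == NULL)
        {
            /* Under strict accounting, this is where we end up. */
            printf("malloc failed cleanly after %lu MB\n",
                   (unsigned long) (total / (1024 * 1024)));
            return 0;
        }

        /*
         * Touch every page.  Under the default overcommit policy this
         * is where the kernel may discover it can't deliver, and the
         * OOM killer picks a victim, not necessarily this process.
         */
        memset(p, 0xff, chunk);
        total += chunk;
    }
}
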
This is perhaps an okay behavior for desktop systems being run by
people who are accustomed to Microsoft-like reliability. But to make it
the default is brain-dead, and to make it the only available behavior
(as seems to have been true until very recently) defies belief. The
setting now called "paranoid overcommit" is IMHO the *only* acceptable
one for any sort of server system. With anything else, you risk having
critical userspace daemons killed through no fault of their own.
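
For the archives: on kernels that have the knob at all, the policy is
selected through /proc/sys/vm/overcommit_memory (the
vm.overcommit_memory sysctl); on recent kernels a value of 2 selects
strict accounting, which can be made permanent with
"vm.overcommit_memory = 2" in /etc/sysctl.conf. The numbering has
shifted between kernel versions, so check the overcommit-accounting
document that ships with your kernel source. A daemon that cares
could sanity-check the setting at startup, along these lines (just a
sketch):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode;

    if (f == NULL)
    {
        fprintf(stderr, "no overcommit_memory knob on this kernel\n");
        return 1;
    }
    if (fscanf(f, "%d", &mode) != 1)
    {
        fclose(f);
        fprintf(stderr, "could not parse overcommit_memory\n");
        return 1;
    }
    fclose(f);

    /*
     * 0 = heuristic overcommit (the usual default), 1 = always
     * overcommit, 2 = strict accounting on recent kernels.  See the
     * kernel's overcommit-accounting document for the exact meanings
     * on your version.
     */
    if (mode != 2)
        fprintf(stderr, "warning: vm.overcommit_memory = %d; "
                "the OOM killer is in play\n", mode);
    return 0;
}
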
regards, tom lane