On Tue, Dec 11, 2007 at 03:08:36PM +0100, Listaccount wrote:
> I would not have been surprised if the OOM-killer had gone around in
> case of a memory shortage, but I was surprised to see fork fail on a
> system with 1GB of memory available.
You don't understand: the system _did not_ have 1G of memory available. It
was all committed to applications that had asked for it. Just because they
asked for memory they were never going to use doesn't mean it isn't gone:
as far as the kernel is concerned, it's used. The
overcommit trick some OSes have implemented is a filthy hack to get around
poor memory allocation discipline in applications.
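If you want to see what the kernel is actually counting, the numbers are in
/proc/meminfo: Committed_AS is how much has already been promised to
processes, and CommitLimit is the ceiling that gets enforced once strict
accounting is turned on. Here's a minimal sketch in C that just reads those
two lines out of /proc (nothing Postgres-specific about it):

    /* commitinfo.c -- print the kernel's commit accounting.
     * CommitLimit: what the kernel will hand out under strict accounting.
     * Committed_AS: what has already been promised to processes. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (f == NULL) {
            perror("fopen /proc/meminfo");
            return 1;
        }

        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "CommitLimit:", 12) == 0 ||
                strncmp(line, "Committed_AS:", 13) == 0)
                fputs(line, stdout);
        }

        fclose(f);
        return 0;
    }

Run that on the box in question and you'll likely find Committed_AS was at
or near the limit when fork failed, regardless of what "free" reported.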
The point of the PostgreSQL documentation is to tell you how best to run
Postgres, safely and reliably. The only safe and reliable way to run on
Linux is not to use overcommit. Turning it off ensures that the system
can't run out of memory in this way.
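For reference, the setting lives in /proc/sys/vm/overcommit_memory
(0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict
accounting, i.e. overcommit off). The change itself is normally made with
sysctl -w vm.overcommit_memory=2 or an entry in /etc/sysctl.conf, which is
what the Postgres docs describe. A small sketch to check what a machine is
currently running with:

    /* overcommit_mode.c -- report the current vm.overcommit_memory mode. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        int mode;

        if (f == NULL || fscanf(f, "%d", &mode) != 1) {
            perror("read /proc/sys/vm/overcommit_memory");
            return 1;
        }
        fclose(f);

        printf("vm.overcommit_memory = %d (%s)\n", mode,
               mode == 2 ? "strict accounting" :
               mode == 1 ? "always overcommit" : "heuristic overcommit");
        return 0;
    }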
What I _would_ support in the docs is the following addition in 17.4.3,
where this is discussed:
. . .it will lower the chances significantly and will therefore
lead to more robust system behavior. It may also cause fork() to fail
when the machine appears to have available memory. This is done by
selecting. . .
Or something like that. This would warn potential users that they really do
need to read their kernel docs.
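To make the failure mode concrete for anyone who hits it: with overcommit
off, fork() itself returns -1 with errno set to EAGAIN or ENOMEM when the
child's copy of the address space can't be committed, even though free(1)
shows plenty of memory. A bare-bones sketch of what that looks like from
the application side:

    /* forkcheck.c -- show how a refused fork surfaces to the application. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            /* Under strict accounting this is the EAGAIN/ENOMEM case. */
            fprintf(stderr, "fork failed: %s\n", strerror(errno));
            return 1;
        }

        if (pid == 0)
            _exit(0);           /* child: nothing to do */

        waitpid(pid, NULL, 0);  /* parent: reap the child */
        puts("fork succeeded");
        return 0;
    }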
A