On Wed, Mar 9, 2016 at 11:00 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Note that when running with overcommit_memory = 0, if you do start
> to run out of memory, the OOM killer will often kill the postmaster.
> This is bad. If this happens, make sure to kill all postgres children
> before trying to restart the db, as starting a new postmaster with
> children still running will instantly and permanently corrupt /
> destroy your db.
And even if it kills one of the "normal" backends (perhaps even the
one responsible for the excessive allocations), it causes a PANIC,
which is a crash and restart of the entire database service. The
kill will usually happen on a reference to memory which appeared to
be successfully allocated, because with overcommit the kernel
over-promises at allocation time and only commits pages when they
are first touched; that can be fairly confusing. With
overcommit_memory = 2 you usually get just a FATAL error (loss of
connection) on the one connection whose allocation puts things over
the top, with a dump of the space used by memory contexts in the
log.
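As an aside on the advice above about killing surviving children: a
quick way to check for them before restarting, on a typical Linux
box (assuming the server runs as the "postgres" OS user), is
something like:

    # list any backends still running under the postgres user
    ps -u postgres -o pid,ppid,cmd

If anything is still listed, stop those processes before starting a
new postmaster.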
> The eventual state you want is to be able to run with overcommit = 2
> and settings that prevent out-of-memory conditions.
Agreed.
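For reference, a minimal sketch of the kernel settings involved; the
overcommit_ratio value below is only an illustration and depends on
your RAM/swap mix:

    # /etc/sysctl.conf
    # mode 2: refuse allocations beyond the commit limit instead of
    # over-promising and letting the OOM killer fire later
    vm.overcommit_memory = 2
    # in mode 2 the commit limit is swap plus this percentage of RAM;
    # 80 is an example value, not a recommendation
    vm.overcommit_ratio = 80

Apply with "sysctl -p" (or at boot), and compare CommitLimit against
Committed_AS in /proc/meminfo to see how much headroom you have.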
> Note too that if you never seem to actually run out of memory but
> get allocation errors, it can also be a lack of file handles (I
> think that's what caused it for me in the past; it's been a while).
> The point being that you can get a failure to allocate memory when
> there's plenty of memory, due to other settings on your server.
Yes, I have seen that, too.
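For anyone wanting to rule that out, the usual places to look on
Linux are:

    # per-process open file limit for the user in question
    ulimit -n
    # system-wide maximum and current usage
    cat /proc/sys/fs/file-max
    cat /proc/sys/fs/file-nr

On the PostgreSQL side, max_files_per_process caps how many files
each backend will try to keep open, so (roughly) that times
max_connections needs to fit comfortably under those limits.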
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company