Re: Hitting the nfile limit - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Hitting the nfile limit
Msg-id: 2455.1057341741@sss.pgh.pa.us
In response to: Hitting the nfile limit  (Michael Brusser <michael@synchronicity.com>)
Responses: Re: Hitting the nfile limit  (Michael Brusser <michael@synchronicity.com>)
List: pgsql-hackers

Michael Brusser <michael@synchronicity.com> writes:
> Apparently we managed to run out of the open file descriptors on the host
> machine.

This is pretty common if you set a large max_connections value while
not doing anything to raise the kernel nfile limit.  Postgres will
follow what the kernel tells it is a safe number of open files per
process, but far too many kernels lie through their teeth about what
they can support :-(
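
If you want to see what the kernel is actually configured to allow, something
along these lines should show it (sysctl names vary by platform; these are
only illustrative):

    # Linux: system-wide limit on open files
    sysctl fs.file-max
    # FreeBSD: same idea, different knob
    sysctl kern.maxfiles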

You can reduce max_files_per_process in postgresql.conf to keep Postgres
from believing what the kernel says.  I'd recommend making sure that
max_connections * max_files_per_process is comfortably less than the
kernel nfiles setting (don't forget the rest of the system wants to have
some files open too ;-))
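
For instance (numbers made up for illustration): if the kernel nfile limit is
16384 and you run with max_connections = 100, you might set in postgresql.conf

    # illustrative settings only -- adjust to your own kernel limit
    max_connections = 100
    max_files_per_process = 100

which caps Postgres at roughly 100 * 100 = 10000 descriptors and leaves a
decent cushion for everything else on the box.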

> I wonder how Postgres handles this situation.
> (Or power outage, or any hard system fault, at this point)

Theoretically we should be able to recover from this without loss of
committed data (assuming you were running with fsync on).  Is your QA
person certain that the record in question had been written by a
successfully-committed transaction?

        regards, tom lane

