Re: "Too many open files in system" error - Mailing list pgsql-admin

From Jonatan Evald Buus
Subject Re: "Too many open files in system" error
Msg-id 113ce31b0908122258x1c9401c0u103375843313e7cb@mail.gmail.com
In response to Re: "Too many open files in system" error  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: "Too many open files in system" error  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-admin
Cheers for the insight, Tom.
We generally have anywhere between 60 and 100 active connections to Postgres under normal load, but at peak times this may increase, hence max_connections = 256.
There are several databases in the Postgres cluster, so a rough estimate would be approximately 200 tables in total.
At least 2 of the databases contain 50 - 100 tables each (most of the tables are static), spread across 5 - 10 schemas within each database.
Is this enough to gauge the expected number of open files?
If not, what additional information would be required?
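For what it's worth, a rough lower bound on the per-database relation count can be queried directly; the below is just a sketch ("mydb" is a placeholder), and the real file count will be higher since relations over 1GB are split into multiple segment files and backends also open WAL and temp files:

# rough count of relation files (tables, indexes, TOAST tables) in one database;
# run once per database in the cluster and sum the results
psql -d mydb -c "SELECT count(*) FROM pg_class WHERE relkind IN ('r','i','t')"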

Also, what would be a reasonable setting for "max_files_per_process" on a machine with 2GB RAM running FreeBSD 7.1?
The documentation mentions that "max_files_per_process" may be set as low as 25, but what would the implications of such a restriction be?
Based on your suggestion, 50,000 / 256 ≈ 195, so setting "max_files_per_process" to around 200 seems reasonable, in addition to increasing the kern.maxfiles limit?
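In concrete terms, I'm assuming the change would look roughly like the below (the values are guesses derived from the calculation above, not tested recommendations):

# /etc/sysctl.conf on FreeBSD -- raise the system-wide open-file limit
kern.maxfiles=65536

# apply the same change immediately, without a reboot
sysctl kern.maxfiles=65536

# postgresql.conf -- cap the number of files each backend keeps open
max_files_per_process = 200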

By the way, we've only been able to find the following kernel log entries that appear related to the problem:
Aug  8 06:30:50 node5 kernel: kern.maxfiles limit exceeded by uid 70, please see tuning(7).
Aug  8 06:30:57 node5 kernel: kern.maxfiles limit exceeded by uid 70, please see tuning(7).
Aug  8 06:32:57 node5 last message repeated 5 times
Aug  8 06:36:57 node5 last message repeated 9 times
"uid 70" is the Postgres user but Postgres logged the "too many files open" error several times in the days following the 8th.
Also, the system never appeared unresponsive at any time while the errors occurred, even though unresponsiveness seems to be a common symptom judging by previous discussions of this type of event.
We've previously run stress tests on the clusters during which the database server reached a very high load (30 or so) without any errors being logged (by Postgres or otherwise), so it's quite curious that this problem would "suddenly" appear.
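Going forward we could watch descriptor usage directly; a sketch for FreeBSD (assuming the Postgres system user is "pgsql", which is what uid 70 maps to in the ports tree):

# system-wide open files versus the kernel limit
sysctl kern.openfiles kern.maxfiles

# descriptors currently held by the Postgres user (output includes a header line)
fstat -u pgsql | wc -l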

Appreciate the input

Cheers
Jona

On Thu, Aug 13, 2009 at 1:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Jonatan Evald Buus <jonatan.buus@cellpointmobile.com> writes:
> Is it normal for PostgreSQL to have close to 5000 file handles open while
> running?

It can be, if you have enough active backends and enough tables that
they are touching.  You have not provided nearly enough information to
gauge what the expected number of actual open files might be in your
installation, but I'll just point out that at max_connections = 256 and
max_files_per_process = 500 you have no grounds to complain if Postgres
tries to use 128000 open files.  If that value of max_connections
actually is representative of what you need to do, I'd recommend cutting
max_files_per_process to a couple hundred and upping the kernel limit
to somewhere north of 50000.

                       regards, tom lane
