Re: large number of files open... - Mailing list pgsql-general

From Steve Wolfe
Subject Re: large number of files open...
Msg-id 001201c19ed9$eee28900$d281f6cc@iboats.com
In response to large number of files open...  ("Thomas F. O'Connell" <tfo@monsterlabs.com>)
List pgsql-general
> i'm running postgres 7.1.3 in a production environment. the database
> itself contains on the order of 100 tables, including some complex
> triggers, functions, and views. a few tables (on the order of 10) that
> are frequently accessed have on the order of 100,000 rows.
>
> every now and then, traffic on the server, which is accessed publicly
> via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
> hard limit of number of files open: 8191.
>
> this, unfortunately, crashes the machine. in a production environment of
> this magnitude, is that a reasonable number of files to expect postgres
> to need at any given time? is there any documentation anywhere on what
> the number of open files depends on?

  My first recommendation would be to run Postgres on a separate machine
if it's being hit that hard, but hey, maybe you just don't feel like it.
; )

  Our web servers handle a very large number of virtual domains, so they
open up a *lot* of log files, and have (at times) hit the same problem
you're running into.  It used to be necessary to recompile the kernel to
raise the limits, but that ain't so any more, luckily.  With 2.4 kernels,
you can do something like this:

echo '16384' > /proc/sys/fs/file-max
echo '65536' > /proc/sys/fs/inode-max

or, in /etc/sysctl.conf,

fs.file-max = 16384
fs.inode-max = 65536

then, /sbin/sysctl -p
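
  If you want to make sure the change took, and keep an eye on how close
you're getting to the ceiling, something like this works (just a quick
sketch, assuming the standard 2.4 /proc layout):

cat /proc/sys/fs/file-max    # the current limit
cat /proc/sys/fs/file-nr     # allocated handles, free handles, and the max

  When the first number in file-nr starts closing in on the third, you're
about to hit the same wall again.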

  Remember that inode-max needs to be at least twice file-max, and if I
recall, at least three times file-max is recommended.
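
  For instance, sticking with the numbers above: if you leave file-max at
16384, that rule puts the floor for inode-max around 49152, and the 65536
above gives you some extra headroom:

fs.file-max = 16384
fs.inode-max = 49152

  (If memory serves, you can also cat /proc/sys/fs/inode-nr to see how much
of the inode table is actually in use.)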

steve


