"Thomas F. O'Connell" <tfo@monsterlabs.com> writes:
> i'm running postgres 7.1.3 in a production environment. [snip]
> every now and then, traffic on the server, which is accessed publicly
> via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
> hard limit of number of files open: 8191.

What OS is this?

You can reconfigure the kernel filetable larger in all Unixen that I
know of, but it's more painful in some than others. Unfortunately,
some systems' sysconf() reports a larger _SC_OPEN_MAX value than the
kernel can realistically support over a large number of processes.
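
If you want to see what your kernel is advertising, a trivial check
like the following will print it (plain POSIX sysconf(), nothing
Postgres-specific; just a sketch so you can compare the reported
value against what the machine can actually sustain):

#include <stdio.h>
#include <unistd.h>

/*
 * Print the per-process open-file limit the kernel reports.  This is
 * the same sysconf() value Postgres consults when deciding how many
 * files each backend may keep open.
 */
int
main(void)
{
    long    max_open = sysconf(_SC_OPEN_MAX);

    if (max_open < 0)
        printf("sysconf(_SC_OPEN_MAX): no definite limit reported\n");
    else
        printf("sysconf(_SC_OPEN_MAX) = %ld\n", max_open);
    return 0;
}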

> this, unfortunately, crashes the machine. in a production environment of
> this magnitude, is that a reasonable number of files to expect postgres
> to need at any given time? is there any documentation anywhere on what
> the number of open files depends on?

If left alone, Postgres could conceivably open every file in your
database in each backend process. There is a per-backend limit on
number of open files, but it's taken from the aforesaid sysconf()
result; if your kernel reports an overly large sysconf(_SC_OPEN_MAX)
then you *will* have trouble.

In 7.2 there is a config parameter max_files_per_process that can be
set to limit the per-backend file usage to something less than what
sysconf claims. This does not exist in 7.1, but you could hack up
pg_nofile() in src/backend/storage/file/fd.c to enforce a suitable
limit.
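
In 7.2 that's a one-line postgresql.conf entry, along the lines of
max_files_per_process = 100 (pick a number that suits your
installation).  For 7.1 the hack boils down to clamping the value
pg_nofile() derives from sysconf() rather than trusting the kernel's
report.  A rough sketch of the idea -- the names below are
illustrative, not the actual fd.c code:

#include <unistd.h>

/* Made-up ceiling; the real fix would live inside pg_nofile(). */
#define BACKEND_FD_CEILING 100

static long
clamped_nofile(void)
{
    long    no_files = sysconf(_SC_OPEN_MAX);

    /*
     * If sysconf can't say, or claims something absurdly large,
     * fall back to our own ceiling.
     */
    if (no_files < 0 || no_files > BACKEND_FD_CEILING)
        no_files = BACKEND_FD_CEILING;
    return no_files;
}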

In any case you probably don't want to set the per-backend limit much
less than maybe 40-50 files. If that times the allowed number of
backends is more than, or even real close to, your kernel filetable
size, you'd best increase the filetable size.
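
To put numbers on it: 64 allowed backends at 50 files apiece is only
3200 potential open files, comfortably under an 8191-entry filetable;
but let the backend count creep toward 160 and you're at 8000, which
leaves almost nothing for Apache/mod_perl and the rest of the system.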

			regards, tom lane