On Tuesday March 14 2006 10:46 am, Tom Lane wrote:
> "Ed L." <pgsql@bluepolka.net> writes:
> > We have 3 clusters with 24K, 34K, and 47K open files
> > according to lsof. These same clusters have 164, 179, and
> > 210 active connections, respectively. Their schemas,
> > counting the number of user and system entries in pg_class
> > as a generously rough measure of potential open files,
> > contain roughly 2000 entries each. Those open file counts
> > seem pretty plausible; they're just much higher than what we
> > see on the older systems.
>
> Hm. AFAICT from the CVS logs, 7.4.2 and later should have
> about the same behavior as 8.1.* in this regard. What version
> is the older installation exactly?
The older machines each run a mix of 7.3.4, 7.4.6, and 7.4.8.
I'm working on an lsof comparison to find the specific
differences. It would seem the factors driving the number of
open files are the current connections, the number of relations,
indices, and so on. Am I correct about that?
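
For concreteness, the tally I have in mind is something along
these lines: a rough Python sketch that assumes lsof's default
columns (field 2 is the PID) and backends whose command name
starts with "postgres"; adjust to "postmaster" if that's what
your version reports.

    import subprocess
    from collections import Counter

    # "lsof -c postgres" lists open files for every process
    # whose command name begins with "postgres"; field 2 of
    # each row is the PID.
    out = subprocess.check_output(["lsof", "-c", "postgres"],
                                  text=True)

    per_backend = Counter()
    for row in out.splitlines()[1:]:      # skip the header line
        fields = row.split()
        if len(fields) >= 2:
            per_backend[fields[1]] += 1   # tally rows per PID

    for pid, n in per_backend.most_common():
        print(pid, n)                     # busiest backends first
    print("total:", sum(per_backend.values()))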
> You can always reduce max_files_per_process if you want more
> conservative behavior.
Ah, thanks. I'm not particularly worried about this, since the
numbers on the new system roughly make sense to me, but others
here are concerned, so I'm trying to explain/justify/understand
them better. If we want to handle 16 clusters on this one box,
each with 300 max_connections and 2000 relations, would it be
ballpark reasonable to say that in the worst case we might have
300 backends with ~2000 open file descriptors each (300 * 2000 =
600K open files per cluster, 600K * 16 clusters = 9.6M, call it
10M open files)? Increasing the kernel parameter 'nfile' (max
total open files on the system) to something like 10M seems to
make some of the ITRC HP gurus gasp. (I suspect we'll hit I/O
limits long before open files become an issue.)
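
FWIW, here's that back-of-the-envelope arithmetic in Python
form, plus a tighter bound using the max_files_per_process knob
Tom mentioned (which defaults to 1000, if I'm reading the docs
right):

    clusters        = 16
    max_connections = 300
    relations       = 2000   # rough pg_class count per cluster

    # Pessimistic bound: every backend holds a descriptor for
    # every relation at once.
    per_cluster = max_connections * relations    # 600,000
    system_wide = per_cluster * clusters         # 9,600,000

    # Tighter bound: each backend's fd cache is limited by
    # max_files_per_process (default 1000).
    max_files_per_process = 1000
    capped = (clusters * max_connections
              * min(relations, max_files_per_process))

    print("pessimistic:", system_wide)   # 9600000
    print("capped:     ", capped)        # 4800000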
Ed