Re: [HACKERS] file descriptors leak? - Mailing list pgsql-hackers

From Tom Lane
Subject Re: [HACKERS] file descriptors leak?
Msg-id 11404.941555895@sss.pgh.pa.us
In response to Re: [HACKERS] file descriptors leak?  ("Gene Sokolov" <hook@aktrad.ru>)
Responses Re: [HACKERS] file descriptors leak?  (Oleg Bartunov <oleg@sai.msu.su>)
List pgsql-hackers
"Gene Sokolov" <hook@aktrad.ru> writes:
> We disconnected all clients and the number of descriptors dropped from 800
> to about 200, which is reasonable. We currently have 3 connections and ~300
> used descriptors. The "lsof -u postgres" is attached.

Hmm, I see a postmaster with 8 open files and one backend with 34.
Doesn't look out of the ordinary to me.

> It seems ok except for a large number of open /dev/null.

I see /dev/null at the stdin/stdout/stderr positions, which I suppose
means that you started the postmaster with -S instead of directing its
output to a logfile.

It is true that on a system that'll let individual processes have as
many open file descriptors as they want, Postgres can soak up a lot.
Over time I'd expect each backend to acquire an FD for practically
every file in the database directory (including system tables and
indexes).  So in a large installation you could be looking at thousands
of open files.  But the situation you're describing doesn't seem like
it should reach those kinds of numbers.
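
(If you want to spot-check a single process without wading through lsof
output, a quick hack along these lines would do -- purely illustrative,
nothing that exists in the Postgres tree:)

    /*
     * Count the calling process's open descriptors by probing each
     * possible fd with fcntl().  Illustrative only.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        long maxfd = sysconf(_SC_OPEN_MAX);
        long fd;
        int  n = 0;

        for (fd = 0; fd < maxfd; fd++)
            if (fcntl((int) fd, F_GETFD) != -1) /* EBADF => not open */
                n++;
        printf("open descriptors: %d\n", n);
        return 0;
    }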

The number of open files per backend can be constrained by fd.c, but
AFAIK there isn't any way to set a manually-specified upper limit; it's
all automatic.  Perhaps there should be a configuration option to add
a limit.
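
To illustrate what I mean by "automatic": fd.c hands out virtual file
descriptors and keeps only a bounded pool of real kernel descriptors
open behind them, closing the least-recently-used one when the pool
fills up and quietly reopening files as needed.  Here's a
much-simplified sketch of that idea -- NOT the actual fd.c code; the
names and the hardwired cap are invented for the example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_VFDS     64     /* virtual handles available to callers */
    #define MAX_REAL_FDS 4      /* cap on simultaneously open kernel fds */

    typedef struct
    {
        char path[256];         /* file this handle refers to */
        int  fd;                /* real kernel fd, or -1 if closed */
        long lru;               /* last-use stamp for eviction */
    } Vfd;

    static Vfd  vfds[MAX_VFDS];
    static int  nvfds = 0;
    static int  nopen = 0;      /* real fds currently open */
    static long stamp = 0;

    /* Close the least-recently-used real descriptor to free a slot. */
    static void
    evict_lru(void)
    {
        int     victim = -1;
        int     i;

        for (i = 0; i < nvfds; i++)
            if (vfds[i].fd >= 0 &&
                (victim < 0 || vfds[i].lru < vfds[victim].lru))
                victim = i;
        if (victim >= 0)
        {
            close(vfds[victim].fd);
            vfds[victim].fd = -1;
            nopen--;
        }
    }

    /* Hand out a virtual handle; the real open() happens lazily. */
    int
    vfd_open(const char *path)
    {
        if (nvfds >= MAX_VFDS)
            return -1;
        snprintf(vfds[nvfds].path, sizeof(vfds[nvfds].path), "%s", path);
        vfds[nvfds].fd = -1;
        return nvfds++;
    }

    /* Get a usable kernel fd for a handle, reopening if evicted. */
    int
    vfd_real_fd(int v)
    {
        Vfd    *f = &vfds[v];

        if (f->fd < 0)
        {
            if (nopen >= MAX_REAL_FDS)
                evict_lru();
            f->fd = open(f->path, O_RDONLY);
            if (f->fd < 0)
                return -1;
            nopen++;
        }
        f->lru = ++stamp;
        return f->fd;
    }

The real code has to worry about seek positions, temp files, and the
like, but the LRU pool is the essence of why the per-backend descriptor
count stays bounded.
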
        regards, tom lane

