
From: Tom Lane
Subject: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
Msg-id: 16545.967483978@sss.pgh.pa.us
In response to: Re: Re: Too many open files (was Re: spinlock problems reported earlier) (Brook Milligan <brook@biology.nmsu.edu>)
List: pgsql-hackers
Brook Milligan <brook@biology.nmsu.edu> writes:
> In any case, if this really follows the POSIX standard, perhaps
> PostgreSQL code should assume these semantics and work around other
> cases that don't follow the standard (instead of work around the POSIX
> cases).

HP asserts that *they* follow the POSIX standard, and in this case
I'm more inclined to believe them than the *BSD camp.  A per-process
limit on open files has existed in most Unices I've heard of; I had
never heard of a per-userid limit until yesterday.  (And I'm not yet
convinced that that's actually what *BSD implements; are we sure it's
not just a typo in the man page?)

64 or so for _SC_OPEN_MAX is not really what I'm worried about anyway.
IIRC, we've heard reports that some platforms return values in the
thousands, ie, essentially telling each process it can have the whole
kernel FD table, and it's that behavior that I'm speculating is causing
Marc's problem.
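
To make that concrete with purely illustrative numbers (not figures from
Marc's system): if sysconf(_SC_OPEN_MAX) reports 4096 and fifty backends
are running, the backends collectively believe they're entitled to some
200,000 descriptors, which could easily swamp a kernel file table sized
for only a few thousand open files.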

Marc, could you check what is returned by sysconf(_SC_OPEN_MAX) on your
box?  And/or check to see how many files each backend is actually
holding open?
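
A quick way to check both numbers is a throwaway C program along these
lines; this is only a sketch, and probing each descriptor with
fcntl(F_GETFD) is just one portable way to count what a process actually
has open:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    long    max_fd = sysconf(_SC_OPEN_MAX);
    long    nopen = 0;
    long    fd;

    printf("sysconf(_SC_OPEN_MAX) = %ld\n", max_fd);

    if (max_fd < 0)
        max_fd = 1024;          /* limit indeterminate; probe a fixed range */

    /* fcntl(F_GETFD) succeeds only for descriptors that are actually open */
    for (fd = 0; fd < max_fd; fd++)
    {
        if (fcntl((int) fd, F_GETFD) != -1)
            nopen++;
    }

    printf("descriptors currently open: %ld\n", nopen);
    return 0;
}

Run it under the same user and environment as the postmaster so the
limits it sees match what the backends see.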
        regards, tom lane

