Thread: Too many open files

Too many open files

From: Darin Fisher
I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.

Under a pretty heavy load:
    1000 transactions per second
    32 open connections

Everything restarts because of "too many open files" errors.
I have increased my maximum number of open files to 16384, but this
just delays the inevitable.

I have tested the same scenario under Solaris 8 and it works
fine.

Is there anything I can do about this?

Darin

Re: Too many open files

From: Tom Lane
Date: Wed, 1 Aug 2001
Darin Fisher <darinf@pfm.net> writes:
> I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.
> Under a pretty heavy load:
>     1000 transactions per second
>     32 open connections

> Everything restarts because of "too many open files" errors.
> I have increased my maximum number of open files to 16384, but this
> just delays the inevitable.

> I have tested the same scenario under Solaris 8 and it works
> fine.

Linux (and BSD) have a tendency to promise more than they can deliver
about how many files an individual process can open.  Look at
pg_nofile() in src/backend/storage/file/fd.c --- it believes whatever
sysconf(_SC_OPEN_MAX) tells it, and on these OSes the answer is likely
to be several thousand.  Which the OS can indeed support when *one*
backend does it, but not when dozens of 'em do it.

I have previously suggested that we should have a configurable upper
limit for the number-of-openable-files that we will believe --- probably
a GUC variable with a default value of, say, a couple hundred.  No one's
gotten around to doing it, but if you'd care to submit a patch...

As a quick hack, you could just insert a hardcoded limit in
pg_nofile().
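
Something along these lines (a sketch only; the helper name and the
256 cap are illustrative, not the actual fd.c code):

/* Sketch only: clamp whatever the OS promises via sysconf() to a
 * hardcoded per-backend cap, so that dozens of backends cannot
 * collectively exhaust the kernel's file table.
 */
#include <unistd.h>

#define FILES_PER_BACKEND_CAP 256       /* assumed cap value */

long
pg_nofile_clamped(void)
{
    long limit = sysconf(_SC_OPEN_MAX);

    if (limit < 0)                      /* sysconf() failed */
        limit = FILES_PER_BACKEND_CAP;
    else if (limit > FILES_PER_BACKEND_CAP)
        limit = FILES_PER_BACKEND_CAP;
    return limit;
}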

            regards, tom lane

Re: Too many open files

From: Oleg Bartunov
From my /etc/rc.d/rc.local:

# increase the max socket receive buffer to 128 KB, to optimize
# proxy<->backend traffic
echo 131072 > /proc/sys/net/core/rmem_max
# increase the system-wide maximum number of open files
echo 8192 > /proc/sys/fs/file-max
# increase the maximum shared memory segment size to ~95 MB
echo "100000000" > /proc/sys/kernel/shmmax


    Regards,
        Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Re: Too many open files

From: Darin Fisher
Thanks! So far that looks like it is helping.
Only time will tell :)
I take it that pg_nofile is the max number of files to open per Postgres
session?

Darin

Re: Too many open files

From: Tom Lane
> I take it that pg_nofile is the max number of files to open per Postgres
> session?

Right, it's per backend.
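So the totals multiply: if each of 32 backends believes it may open,
say, 1024 files, the worst-case demand is 32 * 1024 = 32768 descriptors,
past even a 16384 file-max. Capping each backend at 256 would bound the
total at 32 * 256 = 8192. (Illustrative figures, not measurements.)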

            regards, tom lane