On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote:
> (2013/08/05 21:23), Tom Lane wrote:
> > Andres Freund <andres@2ndquadrant.com> writes:
> >> ... Also, there are global
> >> limits on the number of filehandles that can be open simultaneously on a
> >> system.
> >
> > Yeah. Raising max_files_per_process puts you at serious risk that
> > everything else on the box will start falling over for lack of available
> > FD slots.
> Is it really? When I use Hadoop-like NoSQL storage, I set a large number of FDs.
> Actually, the Hadoop wiki says the following:
>
> http://wiki.apache.org/hadoop/TooManyOpenFiles
> > Too Many Open Files
> >
> > You can see this on Linux machines in client-side applications, server code or even in test runs.
> > It is caused by per-process limits on the number of files that a single user/process can have open, which was
> > introduced in the 2.6.27 kernel. The default value, 128, was chosen because "that should be enough".
The first paragraph (the one containing the 128 you're quoting) is talking
about epoll, which we don't use. The second paragraph indeed talks about the
maximum number of fds, but of *one* process.
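If you want to see what that per-process ceiling actually is on a given
machine, a quick sketch along these lines (just the standard Python resource
module, nothing Postgres-specific) prints the soft and hard RLIMIT_NOFILE
values each backend is subject to, i.e. what "ulimit -n" reports:

#!/usr/bin/env python3
# Minimal sketch: print the per-process open-file limits the kernel enforces
# (the same numbers "ulimit -n" / "ulimit -Hn" report). Every PostgreSQL
# backend is bound by these, independently of any system-wide limit.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process soft limit: {soft}, hard limit: {hard}")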
Postgres uses a *process* based model. So max_files_per_process is about the
number of fds in a single backend. You need to multiply it by
max_connections plus a bunch of auxiliary processes to get the overall
number of FDs.
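As a rough back-of-the-envelope check (the numbers below are only
placeholders: the documented defaults plus a guessed count of auxiliary
processes, not a recommendation), you can compare that worst case against
the kernel-wide limit in /proc/sys/fs/file-max:

#!/usr/bin/env python3
# Back-of-the-envelope sketch: worst-case FD usage of a whole cluster vs. the
# Linux system-wide limit. GUC values here are placeholders; read the real
# ones with "SHOW max_files_per_process" and "SHOW max_connections".

max_files_per_process = 1000   # PostgreSQL default
max_connections = 100          # PostgreSQL default
aux_processes = 20             # guess: autovacuum workers, bgwriter, etc.

worst_case = max_files_per_process * (max_connections + aux_processes)

with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().strip())

print(f"worst-case postgres FDs: {worst_case}, fs.file-max: {file_max}")
if worst_case > file_max // 2:
    print("warning: postgres alone could eat most of the box's FD slots")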
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services