Re: Bottlenecks with large number of relation segment files - Mailing list pgsql-general

From Andres Freund
Subject Re: Bottlenecks with large number of relation segment files
Date
Msg-id 20130805102841.GD542@alap2.anarazel.de
In response to Re: Bottlenecks with large number of relation segment files  (KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp>)
Responses Re: Bottlenecks with large number of relation segment files  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: Bottlenecks with large number of relation segment files  (KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp>)
List pgsql-general
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
> (2013/08/05 17:14), Amit Langote wrote:
> >So, within the limits of max_files_per_process, the routines of file.c
> >should not become a bottleneck?
> It may not become bottleneck.
> 1 FD consumes 160 byte in 64bit system. See linux manual at "epoll".

That limit is about max_user_watches, not the general cost of an
fd. Afair they take up a good deal more than that. Also, there are global
limits on the number of file handles that can be open simultaneously on a
system.
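
For illustration, a minimal sketch (assuming a Linux system exposing the
usual /proc/sys files; paths may be absent on other configurations) that
reads the limits being discussed here:

    #!/usr/bin/env python3
    # Minimal sketch: read the Linux kernel limits relevant to this thread.
    # Assumes the standard /proc/sys layout found on most Linux systems.

    def read_proc(path):
        """Return the stripped contents of a /proc file, or None if unreadable."""
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    # Per-user cap on epoll watches -- this is what the 160-byte-per-watch
    # accounting in the epoll(7) man page refers to, not the cost of an fd itself.
    print("epoll max_user_watches:", read_proc("/proc/sys/fs/epoll/max_user_watches"))

    # System-wide ceiling on open file handles.
    print("fs.file-max:", read_proc("/proc/sys/fs/file-max"))

    # Currently allocated / free / maximum file handles.
    print("fs.file-nr:", read_proc("/proc/sys/fs/file-nr"))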


Greetings,

Andres Freund

--
 Andres Freund                       http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

