Re: [GENERAL] Bottlenecks with large number of relation segment files - Mailing list pgsql-hackers

From: Amit Langote
Subject: Re: [GENERAL] Bottlenecks with large number of relation segment files
Date:
Msg-id: CA+HiwqGV5pjEps2fxw=Vs9+SwbChwqhyJTQsAF7tG214TAUkgw@mail.gmail.com
In response to: Re: [GENERAL] Bottlenecks with large number of relation segment files (KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp>)
Responses: Re: [GENERAL] Bottlenecks with large number of relation segment files (KONDO Mitsumasa <kondo.mitsumasa@lab.ntt.co.jp>)
List: pgsql-hackers
On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa
<kondo.mitsumasa@lab.ntt.co.jp> wrote:
> Hi Amit,
>
>
> (2013/08/05 15:23), Amit Langote wrote:
>>
>> May the routines in fd.c become a bottleneck with a large number of
>> concurrent connections to the above database, say something like "pgbench
>> -j 8 -c 128"? Is there any other place I should be paying attention
>> to?
>
> What kind of file system did you use?
>
> When a file is opened, the ext3 or ext4 file system seems to search the
> directory's inodes sequentially to find it.
> Also, PostgreSQL limits FDs to 1000 per process, which seems too small.
> To change it, edit "max_files_per_process = 1000;" in
> src/backend/storage/file/fd.c.
> Rewriting that lets us change the FD limit per process. I have already
> created a patch that addresses this via postgresql.conf, and will submit
> it to the next CF.

Thank you for replying, Kondo-san.
The file system is ext4.
So, within the limit set by max_files_per_process, the routines in fd.c
should not become a bottleneck?
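
My (possibly incomplete) understanding is that fd.c keeps at most
max_files_per_process real OS descriptors open per backend and recycles the
least recently used one when it needs to open another file, roughly along the
lines of the sketch below. This is a standalone illustration with made-up
names, not the actual fd.c code:

/*
 * Rough sketch (not the actual fd.c code; names are made up) of the idea
 * behind fd.c's virtual file descriptors: a backend may track many files,
 * but only up to max_files_per_process real OS descriptors stay open,
 * recycled in LRU order.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_REAL_FDS 3          /* stand-in for max_files_per_process */

typedef struct Vfd
{
    const char *path;           /* file this virtual descriptor refers to */
    int         fd;             /* real OS descriptor, or -1 if closed */
    long        last_used;      /* LRU clock value at last access */
} Vfd;

static Vfd  vfds[64];
static int  n_vfds = 0;
static int  n_open = 0;         /* real descriptors currently open */
static long lru_clock = 0;

/* Register a file; no real descriptor is opened yet. */
static int
vfd_register(const char *path)
{
    vfds[n_vfds] = (Vfd) {path, -1, 0};
    return n_vfds++;
}

/* Close the least recently used real descriptor to stay under the cap. */
static void
evict_one(void)
{
    int victim = -1;

    for (int i = 0; i < n_vfds; i++)
        if (vfds[i].fd >= 0 &&
            (victim < 0 || vfds[i].last_used < vfds[victim].last_used))
            victim = i;
    close(vfds[victim].fd);
    vfds[victim].fd = -1;
    n_open--;
}

/* Make sure the virtual descriptor has a real open fd, and return it. */
static int
vfd_acquire(int i)
{
    Vfd *v = &vfds[i];

    if (v->fd < 0)
    {
        if (n_open >= MAX_REAL_FDS)
            evict_one();
        if ((v->fd = open(v->path, O_RDONLY)) < 0)
        {
            perror(v->path);
            exit(1);
        }
        n_open++;
    }
    v->last_used = ++lru_clock;
    return v->fd;
}

int
main(void)
{
    /* "Open" more files than the real-descriptor cap allows. */
    const char *paths[] = {"/etc/hosts", "/etc/passwd", "/etc/group",
                           "/etc/services", "/etc/fstab"};

    for (int i = 0; i < 5; i++)
        vfd_register(paths[i]);

    for (int i = 0; i < 5; i++)
    {
        char    buf[16];
        ssize_t n = pread(vfd_acquire(i), buf, sizeof(buf), 0);

        printf("%-14s read %zd bytes, real fds open: %d\n",
               paths[i], n, n_open);
    }
    return 0;
}

Compiled with something like "cc vfd_sketch.c -o vfd_sketch" (the file name and
example paths are arbitrary), the printed count of real descriptors never
exceeds MAX_REAL_FDS even though five files are read.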


--
Amit Langote

