Re: [GENERAL] Bottlenecks with large number of relation segment files - Mailing list pgsql-hackers

From KONDO Mitsumasa
Subject Re: [GENERAL] Bottlenecks with large number of relation segment files
Date
Msg-id 51FF5BC6.5000007@lab.ntt.co.jp
In response to Bottlenecks with large number of relation segment files  (Amit Langote <amitlangote09@gmail.com>)
Responses Re: [GENERAL] Bottlenecks with large number of relation segment files  (Amit Langote <amitlangote09@gmail.com>)
List pgsql-hackers
Hi Amit,

(2013/08/05 15:23), Amit Langote wrote:
> May the routines in fd.c become bottleneck with a large number of
> concurrent connections to above database, say something like "pgbench
> -j 8 -c 128"? Is there any other place I should be paying attention
> to?
What kind of file system did you use?

When opening a file, the ext3 or ext4 file system seems to search the directory
entries sequentially to locate the file's inode, so open() gets slower as a
directory accumulates many relation segment files.
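
To check whether directory lookup is really the cost, a micro-benchmark like the
one below could help. This is my own sketch, not something from the original
report; the directory /tmp/fdtest and the file names f0..fN are hypothetical and
must be created beforehand.

/*
 * Time repeated open(2)/close(2) calls in one directory.  Create the test
 * files first, e.g. in bash:
 *     mkdir /tmp/fdtest && cd /tmp/fdtest && touch f{0..99999}
 * Then run with the same file count, e.g.:  ./openbench 100000
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	int		nfiles = (argc > 1) ? atoi(argv[1]) : 100000;
	char	path[64];
	struct timespec t0, t1;
	double	secs;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < nfiles; i++)
	{
		int		fd;

		snprintf(path, sizeof(path), "/tmp/fdtest/f%d", i);
		fd = open(path, O_RDONLY);	/* pay the directory-lookup cost */
		if (fd >= 0)
			close(fd);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d opens in %.3f s (%.1f us per open)\n",
		   nfiles, secs, secs * 1e6 / nfiles);
	return 0;
}

Comparing the per-open time at different directory sizes (and on different file
systems) should show whether lookup cost grows with the number of segment files.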
PostgreSQL also limits file descriptors to 1000 per process, which seems too small.
To raise it, change "max_files_per_process = 1000;" in src/backend/storage/file/fd.c;
rewriting that value changes the FD limit per process. I have already created a
patch that addresses this problem through postgresql.conf, and will submit it to
the next CF.
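
For reference, the default in question is a plain variable initializer in fd.c;
the snippet below only shows that one line (the comment is my own description,
not the actual source comment, and surrounding code differs by version):

/* src/backend/storage/file/fd.c: default cap on open files per backend */
int		max_files_per_process = 1000;

After rebuilding with a larger value, each backend can keep more segment files
open at once; with the postgresql.conf route, one would instead set, say,
max_files_per_process = 10000 (a hypothetical value) and reload.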
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


