I have several databases. Each is about 35 GB in size and holds about 10.5K relations (counted from pg_stat_all_tables). pg_class has about 26K rows, and the data directory contains about 70K files. These are busy machines: they run about 50 transactions per second (approximately 500 rows inserted/updated/deleted per second).
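For anyone who wants to compare their own numbers, the counts above can be reproduced with standard catalog queries like these (a sketch; the figures in the comments are the ones from this post, not general expectations):

```sql
-- Relations tracked by the statistics collector (~10.5K here)
SELECT count(*) FROM pg_stat_all_tables;

-- All pg_class entries: tables, indexes, sequences, etc. (~26K here)
SELECT count(*) FROM pg_class;

-- On-disk size of the current database (~35 GB here)
SELECT pg_size_pretty(pg_database_size(current_database()));
```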
We started getting errors about the number of open file descriptors:

2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000: out of file descriptors: Too many open files; release and retry
2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL statement "insert ….. "
PL/pgSQL function "trigfunc_whatever" line 50 at execute statement
2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile, fd.c:471
2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms
2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query, postgres.c:1090
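In case it helps anyone diagnosing the same error, here is a rough way to see where the descriptors are going. This is a Linux-specific sketch; the "postgres" process name and /proc layout are assumptions about a typical install:

```shell
#!/bin/sh
# Count the file descriptors held by each PostgreSQL backend via /proc
# (Linux only; on a box without running backends the loop prints nothing).
for pid in $(pgrep postgres); do
    echo "$pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l) fds"
done

# Kernel-wide picture:
cat /proc/sys/fs/file-max   # maximum fds the kernel will hand out
cat /proc/sys/fs/file-nr    # allocated / free / max
```

Comparing the per-backend counts against max_files_per_process, and the system total against fs/file-max, shows which limit is actually being hit.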
So we decreased max_files_per_process to 800. This took care of the error, *BUT* it roughly quadrupled the I/O wait on the machine: it went from peaks of about 50% to peaks of over 200% (4-processor machines, 4 GB RAM, RAID). The load on the machine otherwise remained constant.
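For reference, the change amounted to one line in postgresql.conf (requires a server restart to take effect; 1000 is the stock default we lowered from):

```
# postgresql.conf
max_files_per_process = 800    # lowered from the default of 1000
```

With ~10.5K relations per database, each backend can legitimately want far more than 800 descriptors, so lowering this setting forces backends to close and reopen files constantly, which is consistent with the extra I/O wait we're seeing.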