>
> For example, my file system can have 1064960 files in it (one can find
> that out with fsck), but the practical constraint will be much
> lower. I find it difficult to deal with 30000 files in one directory,
> for example. The time it takes to open a file in such directory is
> usually on the order of seconds.
I don't know if this info is useful, but: I have a directory with 150K files
in it, and can open files in it without noticeable delay (this is not through
Postgres). Before setting up this directory, I tried to find out the limit on
how many files is sensible. I couldn't find any info, so I just thought I'd
try it, and it appears to work fine. If you think I am going to hit problems
(I am serving jpgs from this directory for a web site), please let me know.
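For what it's worth, this kind of thing is easy to test empirically. A rough sketch of the experiment (the directory path and file count below are made up for illustration, not my actual setup):

```shell
#!/bin/sh
# Hypothetical benchmark: populate a directory with many files,
# then time how long it takes to open and read one of them.
dir=/tmp/manyfiles
mkdir -p "$dir"

# create 1000 empty files (bump the count to stress the filesystem)
for i in $(seq 1 1000); do
    : > "$dir/file$i"
done

# time a single open/read from the populated directory
time cat "$dir/file500" > /dev/null
```

Repeating the `time` line while increasing the file count should show roughly where lookups start to slow down on a given filesystem.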
running on
Red Hat Linux release 5.2 (Apollo)
Kernel 2.0.36 on an i686
cheers
timj