> I am working on a web portal. There are some ads; we have about 200,000
> ads. Every ad has its own directory, named after its ID, which contains 5
> subdirectories with various sizes of 5 images.
>
> The filesystem is too slow, but I don't know whether performance will
> improve if I store these images in Postgres.
Certainly the problem you are experiencing is that you have 200,000
directories, and directory lookups are not scaling well.
I had a look at this a few weeks ago for an email storage application.
Using a filesystem with better directory lookup performance (xfs,
reiserfs, or jfs, perhaps) is one obvious solution, as is storing the
data in the database. If you want to use files in an ext2/3 filesystem,
you need to break up the directories into a hierarchy.
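If you do want to try the database route, a minimal sketch of what it
looks like is below (Python with psycopg2; the table layout, column
names, ID and connection string are only placeholders, not anything
from your setup):

    import psycopg2

    # assumes a table like:
    #   CREATE TABLE ad_images (ad_id integer, size text, image bytea);
    conn = psycopg2.connect("dbname=ads")
    cur = conn.cursor()

    # read the raw file bytes for one image
    with open("/path/to/source.jpg", "rb") as f:
        data = f.read()

    # store the bytes in a bytea column, one row per (ad, size)
    cur.execute(
        "INSERT INTO ad_images (ad_id, size, image) VALUES (%s, %s, %s)",
        (12345678, "thumb", psycopg2.Binary(data)),
    )
    conn.commit()

Whether that actually beats the filesystem depends on how you serve the
images back out; pulling them through the database on every page hit
has its own cost.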
I did some web research trying to find numbers for how many entries you
can get away with in an ext2/3 directory before the lookup time starts
to bite. I didn't find very much useful data; the best answer I got
was "between 100 and 1000". Since my identifiers are decimal numbers,
I had a choice of breaking them up into groups of two or three digits
(e.g. 12/34/56/78 or 012/345/678). I went for groups of two and it works
well. Certainly this is not the limiting factor in the system as a
whole.
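As a sketch of that scheme (the function name, base path and padding
width here are only illustrative, not what I actually ran), splitting a
decimal ID into groups of two digits and building the path might look
like this in Python:

    import os

    def ad_path(ad_id, base="/srv/ads", width=8, group=2):
        """Map a numeric ID to a nested path, e.g. 12345678 ->
        /srv/ads/12/34/56/78 (zero-padded to 'width' digits)."""
        s = str(ad_id).zfill(width)
        parts = [s[i:i + group] for i in range(0, len(s), group)]
        return os.path.join(base, *parts)

    path = ad_path(12345678)
    os.makedirs(path, exist_ok=True)   # create the hierarchy on first use

With groups of two digits, no level of the hierarchy ever holds more
than 100 entries, which keeps each directory comfortably inside the
"between 100 and 1000" range above.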
Looking back, I wonder if I should have gone for groups of three. Is
the lookup time a function of the number of entries in the directory, or
the size (in bytes) of the directory? Since my directory names are
short in this scheme, I get more directory entries per disk block.
One other thing to mention: have you turned off access time (atime)
updates for the filesystem? (See "man mount" and the noatime option.)
--Phil.