Michael Stone <mstone+postgres@mathom.us> writes:
> xfs' slowness is proportional to the *number* rather than the *size* of
> the files. In postgres you'll tend to have fewer, larger, files than you
> would in (e.g.) a source code repository, so it is generally more
> important to have a filesystem that deletes large files quickly than a
> filesystem that deletes lots of files quickly.

The weird thing is that the files in question were hardly "large".
IIRC his test case used a single int4 column, so the rows were probably
36 bytes apiece allowing for all overhead. So the test cases with about
5K rows were less than 200K in the file, and the ones with 200K rows
were still only a few megabytes.
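
For what it's worth, the arithmetic behind that estimate is easy to check; a quick sketch (the 36-bytes-per-row figure is my rough allowance for tuple header, line pointer, and alignment padding, not an exact number):

```python
# Rough heap-size estimate for a table with a single int4 column.
# 36 bytes/row is an approximation covering the tuple header,
# line pointer, and alignment padding -- not an exact figure.
BYTES_PER_ROW = 36

def approx_table_bytes(nrows: int) -> int:
    """Estimated on-disk heap size, ignoring page headers and fill factor."""
    return nrows * BYTES_PER_ROW

print(approx_table_bytes(5_000))    # 180000 bytes: under 200K, as stated
print(approx_table_bytes(200_000))  # 7200000 bytes: "a few megabytes"
```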

I tried the test on my Linux machine (which I couldn't do when I
responded earlier because it was tied up with another test), and
saw truncate times of a few milliseconds for both table sizes.
This is ext3 on Fedora 6.

So I'm still of the opinion that there's something broken about
Adriaan's infrastructure, but maybe we have to look to an even
lower level than the filesystem. Perhaps he should try getting
some bonnie++ benchmark numbers to see if his disk is behaving
properly.
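
In case it helps, a typical bonnie++ invocation against the drive holding the data directory might look like the following (the directory and size here are placeholders, not Adriaan's actual setup):

```shell
# Run bonnie++ as the postgres user against the data drive.
# -d: directory to test in (must be writable by that user)
# -s: file size in MB for the throughput tests; use at least
#     twice physical RAM so the OS cache doesn't mask disk speed
# -u: user to run as when invoked from a root shell
bonnie++ -d /var/lib/pgsql -s 2048 -u postgres
```

The numbers to look at are the sequential write/read throughput and, given the symptom here, the file creation/deletion rates.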

			regards, tom lane