On Fri, 18 Jul 2003, Ang Chin Han wrote:
> Shridhar Daithankar wrote:
> > On 17 Jul 2003 at 10:41, Nick Fankhauser wrote:
> >
> >>I'm using ext2. For now, I'll leave this and the OS version alone. If I
> >
> >
> > I appreciate your approach, but it's been pretty well shown that ext2
> > is not the best and fastest file system out there.
>
> Agreed.
Huh? How could a journaled file system hope to outrun a simple
unjournaled one? There's simply less overhead with ext2, so it's
quicker; it's just not as crash-safe.
I point you to this link from IBM:
http://www-124.ibm.com/developerworks/opensource/linuxperf/iozone/iozone.php
While ext3 is a clear loser to jfs and reiserfs there, ext2 wins most of
the contests against both of them. Note that XFS wasn't tested. In
general, ext2 is quite fast these days.
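Also worth noting: a fair chunk of ext3's penalty comes from its default
data=ordered journaling mode. If all you want is metadata journaling
(roughly ext2 speed, but with fast recovery instead of a long fsck), you
can mount it writeback-style. A sketch only; the device and mount point
are placeholders, not anything from Nick's setup:

  mount -t ext3 -o data=writeback /dev/hda3 /var/lib/postgres

data=ordered is the default; data=writeback journals metadata only, so
it behaves much more like ext2.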
>
> > IMO, you can safely change that to reiserfs or XFS. Of course, testing
> > is always recommended.
>
> We've been using ext3fs for our production systems. (Red Hat Advanced
> Server 2.1)
>
> And since your (Nick's) system is based on Debian, I have done some
> rough testing on Debian sarge (testing) (with a custom 2.4.20 kernel)
> with ext3fs, reiserfs and jfs. Can't get XFS going easily on Debian,
> though.
>
> I used a single partition mkfs'd with ext3fs, reiserfs and jfs one after
> the other on an IDE disk. Ran pgbench and osdb-x0.15-0 on it.
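(For anyone who wants to repeat this on their own hardware, the per-fs
cycle would go roughly like the following. Untested as written here;
the device name, mount point, and pgbench numbers are just examples,
not the values Ang used:

  mkfs -t ext3 /dev/hdb1          # then reiserfs, then jfs
  mount /dev/hdb1 /mnt/test
  # initdb with PGDATA on /mnt/test and start the postmaster
  createdb bench
  pgbench -i -s 10 bench          # initialize, scaling factor 10
  pgbench -c 10 -t 1000 bench     # 10 clients, 1000 transactions each

The osdb run would follow the same mount-and-point-PGDATA pattern.)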
>
> jfs has been underperforming for me. Somehow its CPU usage is higher
> than that of the other two. As for ext3fs and reiserfs, I can't detect
> any significant difference. So if you're in a hurry, it'll be easier to
> convert your ext2 to ext3 (using tune2fs) and use that. Otherwise, it'd
> be nice if you could do your own testing and post it to the list.
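For the record, that ext2 -> ext3 conversion really is just adding a
journal and remounting; something like this, where /dev/hda3 is only an
example device:

  umount /dev/hda3
  tune2fs -j /dev/hda3    # add a journal to the existing ext2 fs
  # change that partition's fstab entry from ext2 to ext3
  mount /dev/hda3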
I would like to see some tests of how they behave on top of large, fast
RAID arrays, like a 10-disk RAID5 or something. On a single IDE drive
the limiting factor is most likely the bandwidth of the drive itself,
whereas on a large array the bottleneck is more likely to be the file
system code.
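One cheap sanity check along those lines: compare raw sequential reads
off the device with reads through the file system. A rough sketch; the
device, file, and sizes are examples, and you'd want a file bigger than
RAM so the page cache doesn't skew the numbers:

  # raw device read, file system not involved
  dd if=/dev/hda of=/dev/null bs=1024k count=1024
  # the same amount through the file system
  dd if=/mnt/test/bigfile of=/dev/null bs=1024k count=1024

If those two stay close on a big array as well, the file system code
isn't the bottleneck.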