>I think the main difference is that the WAL activity is mostly linear,
>where the normal data activity is rather random access.
That was what I was expecting, and after reading
http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html I
figured that a different stripe size for the WAL set could be worth
investigating. I have now dropped the old sets (10 + 18 disks) and
created two new RAID 1+0 sets (4 disks for WAL, 24 for data) instead.
Bonnie++ is still running, but I'll post the numbers as soon as it has
finished. I did use different stripe sizes for the two sets as well,
8 kB for the WAL disks and 64 kB for the data disks. It's quite
painless to do these things with HBAnywhere, so it's no big deal if I
have to go back to another configuration. The battery-backed cache is
only 256 MB though, and that bothers me; I assume a larger (512 MB -
1 GB) cache would make quite a difference.
Oh well.
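
For reference, the bonnie++ runs against each set will look roughly
like this (mount points, file size and user are just placeholders, not
necessarily my exact invocation):

    # one run per array; -s should be at least twice the amount of RAM,
    # -f skips the slow per-character tests, -n 0 skips file creation
    bonnie++ -d /mnt/pg_wal  -s 16384 -n 0 -f -u postgres
    bonnie++ -d /mnt/pg_data -s 16384 -n 0 -f -u postgres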
>Btw, it may make sense to spread different tables or tables and indices
>onto different Raid-Sets, as you seem to have enough spindles.
This is something I'd also like to test, as a common best practice
these days is to go for a SAME (stripe and mirror everything) setup.
From a development perspective it's easier to use SAME, as the
developers won't have to think about the physical location of new
tables/indices, so if there's no performance penalty with SAME I'll
gladly keep it that way.
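
If I do end up splitting tables and indices, tablespaces (available
since 8.0) seem like the natural way to map them onto different RAID
sets. A rough sketch, with made-up names and mount points:

    -- one tablespace per RAID set (the directories must exist and be
    -- owned by the postgres user)
    CREATE TABLESPACE data_ts  LOCATION '/mnt/pg_data/tables';
    CREATE TABLESPACE index_ts LOCATION '/mnt/pg_data2/indexes';

    -- place new objects explicitly ...
    CREATE TABLE orders (id integer, total numeric) TABLESPACE data_ts;
    CREATE INDEX orders_id_idx ON orders (id) TABLESPACE index_ts;

    -- ... or move existing ones afterwards
    ALTER TABLE orders SET TABLESPACE data_ts;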
>And look into the commit_delay/commit_siblings settings; they allow
>you to trade latency for throughput (meaning a little more latency per
>transaction, but many more transactions per second of throughput for
>the whole system).
In a previous test, using commit_delay=5000 and commit_siblings=20
increased transaction throughput by ~20%, so I'll definitely fiddle
with those settings in the coming tests as well.
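
For reference, that corresponds to these lines in postgresql.conf (the
comments are just my reading of the docs):

    # wait up to 5000 microseconds (5 ms) after a commit before flushing
    # WAL, so other transactions can piggyback on the same fsync ...
    commit_delay = 5000
    # ... but only if at least 20 other transactions are already open
    commit_siblings = 20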
Regards,
Mikael.