On Thu, 2003-07-24 at 13:29, Greg Stark wrote:
> "scott.marlowe" <scott.marlowe@ihs.com> writes:
>
> > If you are writing 4k out to a RAID5 of 10 disks, this is what happens:
> >
> > (assuming 64k stripes...)
> > READ data stripe (64k read)
> > READ parity stripe (64k read)
> > make changes to data stripe
> > XOR the old data stripe, the new data stripe, and the old parity stripe
> > to get the new parity stripe
> > write new parity stripe (64k)
> > write new data stripe (64k)
> >
> > So it's not as bad as you might think.
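[Interjecting here: that parity update is just a bytewise XOR of the old
data, the new data, and the old parity. A rough Python sketch of the
arithmetic the controller does, with made-up buffer names and sizes:

    def new_parity(old_data, new_data, old_parity):
        # new parity = old parity XOR old data XOR new data, byte by byte
        return bytes(p ^ o ^ n
                     for p, o, n in zip(old_parity, old_data, new_data))

    STRIPE = 64 * 1024                    # 64k stripe, as above
    old_data   = bytes(STRIPE)            # data stripe as read from disk
    old_parity = bytes(STRIPE)            # parity stripe as read from disk
    new_data   = bytearray(old_data)      # merge the 4k update in...
    new_data[0:4096] = b"\xff" * 4096     # ...at whatever offset it lands
    parity = new_parity(old_data, bytes(new_data), old_parity)

Only the two 64k reads and the two 64k writes ever touch the disks; the
XOR itself is cheap.]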
>
> The main negative for RAID5 is that it has to do that extra READ. If you're
> doing lots of tiny updates, the extra latency of reading the parity block
> before the new parity can be written out is a real killer. For that reason
> people prefer RAID 0+1 for OLTP systems.
>
> But you have to test your setup in practice to see if it hurts. A big
> data warehousing system will be faster under RAID5 than under RAID1+0 because
> of the extra disks in the stripeset. The more disks in the stripeset, the more
> bandwidth you get.
>
> Even for OLTP systems, whether I've had success with RAID5 has depended
> largely on the quality of the implementation. The Hitachi systems were
> amazing. They had enough battery-backed cache that the extra latency of the
> parity read/write cycle never really showed up at all. But they had a lot
> more than 128MB; I think it was 1GB and could be expanded.
Your last paragraph just stole the objection to the first paragraph
right out of my mouth, since enough cache will let the controller "batch"
all those tiny updates into big updates. But those Hitachi controllers
weren't plugged into x86-type boxen, were they?
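
To put a number on why the cache helps: each small random write on RAID5
costs roughly two reads plus two writes at the disks, versus two writes
(one per mirror) on RAID1+0, and a big write-back cache lets the controller
merge every 4k update that lands in the same 64k stripe into a single
read-modify-write. A toy Python illustration (invented function, made-up
offsets, not a benchmark):

    def rmw_cycles(write_offsets, stripe_bytes=64 * 1024):
        # One parity read-modify-write per dirty stripe if the cache can
        # merge pending writes before flushing, vs. one per write if each
        # 4k update is flushed immediately.
        dirty_stripes = {off // stripe_bytes for off in write_offsets}
        return len(dirty_stripes), len(write_offsets)

    merged, unmerged = rmw_cycles(range(0, 64 * 1024, 4 * 1024))
    print(merged, unmerged)   # 1 vs. 16 parity cycles for sixteen 4k writes

With only 128MB of cache that merge window is small; with 1GB it can
swallow a lot of OLTP churn before the parity math ever shows up.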
--
+-----------------------------------------------------------------+
| Ron Johnson, Jr. Home: ron.l.johnson@cox.net |
| Jefferson, LA USA |
| |
| "I'm not a vegetarian because I love animals, I'm a vegetarian |
| because I hate vegetables!" |
| unknown |
+-----------------------------------------------------------------+