Re: Scaling with memory & disk planning - Mailing list pgsql-general

From Scott Marlowe
Subject Re: Scaling with memory & disk planning
Date Thu, 30 May 2002
Msg-id Pine.LNX.4.33.0205301458530.16066-100000@css120.ihs.com
In response to Re: Scaling with memory & disk planning  (terry@greatgulfhomes.com)
Responses Re: Scaling with memory & disk planning
List pgsql-general
On Thu, 30 May 2002 terry@greatgulfhomes.com wrote:

I agree with ALMOST everything you say, but have a few minor nits to pick.
Nothing personal, just my own experience with RAID testing and such.

> In RAID5, the most efficient solution, every 1 byte written requires LESS
> then 1 byte written for the CRC.

This isn't true for any RAID 5 implementation I'm familiar with.  The
parity stripe is exactly the same size as the data stripes it shares space
with.  But this isn't a really important point, since most RAID arrays write
8k to 128k at a time.  Note that it's not a CRC (Cyclic Redundancy Check)
that gets written, but a straight XOR, hence no space savings.
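
To make that concrete, here's a toy sketch in C (not any particular
controller's firmware) of what gets computed: the parity block is just the
byte-wise XOR of the data blocks in the stripe, so it comes out exactly one
block wide no matter how many data disks feed into it.

#include <stdio.h>
#include <stddef.h>

/* Toy RAID5 parity: byte-wise XOR of the data blocks in a stripe.
 * The parity block is the same size as one data block, no matter
 * how many data disks there are. */
void compute_parity(unsigned char **data, size_t ndisks,
                    size_t blocksize, unsigned char *parity)
{
    size_t i, d;

    for (i = 0; i < blocksize; i++) {
        parity[i] = 0;
        for (d = 0; d < ndisks; d++)
            parity[i] ^= data[d][i];
    }
}

int main(void)
{
    unsigned char d0[4] = {1, 2, 3, 4};
    unsigned char d1[4] = {5, 6, 7, 8};
    unsigned char d2[4] = {9, 10, 11, 12};
    unsigned char *stripe[3] = {d0, d1, d2};
    unsigned char parity[4];
    size_t i;

    compute_parity(stripe, 3, 4, parity);

    /* "Lose" d1 and rebuild it: XOR the parity with the survivors. */
    for (i = 0; i < 4; i++)
        printf("%d ", parity[i] ^ d0[i] ^ d2[i]);   /* prints 5 6 7 8 */
    printf("\n");
    return 0;
}

Rebuilding after a drive failure is the same operation in reverse, as the
main() above shows: XOR the parity with the surviving blocks and the
missing block falls out.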

> Roughly (depending on implementation,
> number of disks) every 3 bytes written requires 4 bytes of disk IO.
>
> RAID5 is the fastest from an algorithm, standpoint.  There is some gotchas,
> RAID5 implemented by hardware is faster the RAID5 implemented by OS, simply
> because the controller on the SCSI card acts like a parallel processor.

This is most definitely not always true, even given equal hardware specs
(i.e. number and type of drives / interfaces all the same).

My old AMI MegaRAID card with 3 Ultra Wide SCSI ports can generate 64 Megs
of parity data per second.  My Celeron 1.1GHz machine can generate
2584 Megs of parity data per second.  The load on the CPU under VERY
heavy reads and writes is about 0.3%, and the max read throughput on a
RAID array of 4 VERY old (non-ultra, non-wide, 7200RPM) 2 Gig drives is
about 33 Megs a second.

The same setup with 7200RPM 4 Gig Ultra narrow drives on an AMI RAID card
can read at about 14 Megabytes a second.
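
If you're curious where a parity figure like that comes from, a trivial XOR
micro-benchmark gets you in the ballpark.  This is only a sketch of the
idea (roughly what the Linux md driver does when it times its XOR routines
at boot), not a real RAID workload, and the buffer size and pass count
below are arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUFSZ  (64 * 1024)
#define PASSES 10000

int main(void)
{
    unsigned char *a = malloc(BUFSZ), *b = malloc(BUFSZ);
    clock_t start, end;
    double secs, mbytes;
    long i;
    size_t j;

    if (!a || !b)
        return 1;
    memset(a, 0x55, BUFSZ);
    memset(b, 0xaa, BUFSZ);

    start = clock();
    for (i = 0; i < PASSES; i++)
        for (j = 0; j < BUFSZ; j++)
            a[j] ^= b[j];          /* the heart of RAID5 parity */
    end = clock();

    secs   = (double)(end - start) / CLOCKS_PER_SEC;
    mbytes = (double)BUFSZ * PASSES / (1024.0 * 1024.0);
    printf("XOR: %.0f MB in %.2f sec = %.0f MB/sec\n",
           mbytes, secs, mbytes / secs);
    free(a);
    free(b);
    return 0;
}

A naive byte loop like this understates what an unrolled or MMX routine can
do, but the point stands: the CPU can XOR data far faster than the drives
can deliver it.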

The most important ingredient in fast RAID is the drives first, the
interface second, and hardware versus software RAID last.  While it was
often true in the dark past of 33 Megahertz CPUs that hardware RAID was
always faster, these days it is often much better to take the extra money
a RAID controller would cost you and spend it on more drives, cheaper
controllers, memory, or CPUs.

Generally speaking, I've found RAID5 with 4 or fewer drives to be about
even with RAID1, while RAID5 with 6 or more drives quickly starts to
outrun a two-drive mirror set.  This is especially true under heavy
parallel access.
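
A back-of-envelope way to see why spindle count wins: count the disk I/Os
behind each logical request.  Call a read 1 disk I/O on either layout, a
write 2 I/Os on RAID1 (both mirrors) and 4 on RAID5 (read old data and
parity, write new data and parity).  The per-drive numbers and read/write
mix below are made up, and the model ignores caching and parity
computation (so it flatters RAID5 a bit), but it shows the shape of the
thing:

#include <stdio.h>

/* Back-of-envelope: effective request rate given the disk I/O
 * cost of each logical read and write on a layout.  All numbers
 * here are illustrative assumptions, not measurements. */
static double effective_iops(int ndrives, double drive_iops,
                             double read_frac, double write_cost)
{
    double avg_cost = read_frac + (1.0 - read_frac) * write_cost;
    return ndrives * drive_iops / avg_cost;
}

int main(void)
{
    double iops = 100.0;   /* assumed random I/Os per drive per sec */
    double r    = 0.8;     /* assumed 80% reads, 20% writes         */

    printf("RAID1, 2 drives: %4.0f req/sec\n",
           effective_iops(2, iops, r, 2.0));
    printf("RAID5, 4 drives: %4.0f req/sec\n",
           effective_iops(4, iops, r, 4.0));
    printf("RAID5, 6 drives: %4.0f req/sec\n",
           effective_iops(6, iops, r, 4.0));
    return 0;
}

Past a handful of drives, RAID5's small-write penalty simply gets buried
under the extra spindles.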

On a subject no one's mentioned yet: >2 drives in a RAID1 setup.

I've done some testing with >2 drives in a mirror (NOT 1+0 or 0+1, just
RAID1 with >2 drives) under Linux, and found that if you are doing 90%
reads it's also a good solution, but for most real-world database apps
it doesn't really help a lot.
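
That's easy to see in miniature: reads can be farmed out across the N
copies, but every write has to hit all N of them.  A toy dispatcher
(hypothetical logic, not what the Linux md driver actually does):

#include <stdio.h>

/* Toy N-way mirror dispatcher (hypothetical, not the md driver).
 * Reads round-robin across the copies, so each disk serves 1/N of
 * them; writes must hit every copy, so extra disks buy nothing on
 * a write-heavy load. */
typedef struct {
    int ndisks;
    int next;
} mirror_t;

static int pick_read_disk(mirror_t *m)
{
    int d = m->next;
    m->next = (m->next + 1) % m->ndisks;
    return d;
}

int main(void)
{
    mirror_t m = { 3, 0 };   /* a 3-way RAID1 set */
    int i, d;

    for (i = 0; i < 6; i++)
        printf("read %d  -> disk %d\n", i, pick_read_disk(&m));

    for (d = 0; d < m.ndisks; d++)
        printf("write 0 -> disk %d\n", d);   /* every copy, every time */
    return 0;
}

So a third mirror disk buys you roughly another drive's worth of read
throughput, and nothing at all on writes.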


