Thread: RAID Controllers

RAID Controllers

From: David Boreham
I'm buying a bunch of new machines (all will run an application that
heavily writes to PG). These machines will have 2 spindle groups in a
RAID-1 config. Drives will be either 15K SAS or 10K SATA (I haven't
decided whether it is better to buy the faster drives, or drives
identical to the ones we are already running in our production servers,
thus achieving commonality in spares across all machines).

Controller choice looks to be between the Adaptec 6405 with the
supercapacitor unit, or the LSI 9260-4i with its BBU. Price is roughly
the same.

Would be grateful for any thoughts on this choice.

Thanks.



Re: RAID Controllers

From: Scott Marlowe
On Mon, Aug 22, 2011 at 8:42 PM, David Boreham <david_list@boreham.org> wrote:
>
> I'm buying a bunch of new machines (all will run an application that
> heavily writes to PG). These machines will have 2 spindle groups in a
> RAID-1 config. Drives will be either 15K SAS or 10K SATA (I haven't
> decided whether it is better to buy the faster drives, or drives
> identical to the ones we are already running in our production servers,
> thus achieving commonality in spares across all machines).
>
> Controller choice looks to be between the Adaptec 6405 with the
> supercapacitor unit, or the LSI 9260-4i with its BBU. Price is roughly
> the same.
>
> Would be grateful for any thoughts on this choice.

If you're running Linux and thus stuck with the command line on the
LSI, I'd recommend anything else.  MegaRAID is the hardest RAID
control software I've ever had to use.  If you can spring for the
money, get the Areca 1680:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023  Be
sure to get the battery unit for it.  You can configure it from an
external ethernet connector very easily, and the performance is
outstandingly good.

Re: RAID Controllers

From: Robert Schnabel
On 8/22/2011 9:42 PM, David Boreham wrote:
> I'm buying a bunch of new machines (all will run an application that
> heavily writes to PG). These machines will have 2 spindle groups in a
> RAID-1 config. Drives will be either 15K SAS or 10K SATA (I haven't
> decided whether it is better to buy the faster drives, or drives
> identical to the ones we are already running in our production servers,
> thus achieving commonality in spares across all machines).
>
> Controller choice looks to be between the Adaptec 6405 with the
> supercapacitor unit, or the LSI 9260-4i with its BBU. Price is roughly
> the same.
>
> Would be grateful for any thoughts on this choice.

I'm by no means an expert, but it seems to me that if you're going to
choose between two 6 Gb/s cards you may as well put SAS2 drives in.  I
have two Adaptec 6445 cards in one of my boxes and several other
Adaptec series 5 controllers in others.  They suit my needs and I
haven't had any problems with them.  I think it has been mentioned
previously, but they do tend to run hot, so plenty of airflow would be
good.

Bob


Re: RAID Controllers

From: David Boreham
On 8/23/2011 5:14 AM, Robert Schnabel wrote:
>
> I'm by no means an expert, but it seems to me that if you're going to
> choose between two 6 Gb/s cards you may as well put SAS2 drives in.  I have
> two Adaptec 6445 cards in one of my boxes and several other Adaptec
> series 5 controllers in others.  They suit my needs and I haven't had
> any problems with them.  I think it has been mentioned previously but
> they do tend to run hot so plenty of airflow would be good.

Thanks. Good point about airflow. By SAS I meant 6 Gbit SAS drives. But
we have many servers already with 10K Raptors, and it is tempting to use
those since we would be able to draw on a common pool of spare drives
across all servers. 15K rpm is tempting, though. I'm not sure the DB
transaction commit rate scales up linearly with spindle speed when a BBU
is used (it would without one).
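To put numbers on that: without a write-back cache, each synchronous
commit has to wait for the WAL write to reach the platter, so the
per-spindle commit ceiling sits near the rotation rate; with a working
BBU write-back cache the controller acknowledges from cache and that
ceiling largely disappears. A quick back-of-the-envelope sketch (shell
arithmetic; assumes one commit per rotation and ignores group commit):

```shell
# Rotation-bound ceiling on synchronous commits/s per spindle:
# with no write-back cache, every commit waits one platter rotation.
for rpm in 10000 15000; do
  echo "$rpm RPM -> $((rpm / 60)) rotations/s, so roughly that many fsyncs/s"
done
```

So in the no-cache case the 15K drives buy roughly 250 vs. 166
commits/s per spindle; behind a healthy BBU cache the RPM difference
matters far less for commit latency.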



Re: RAID Controllers

From: Alan Hodgson
On August 22, 2011 09:55:33 PM Scott Marlowe wrote:
>
> If you're running Linux and thus stuck with the command line on the
> LSI, I'd recommend anything else.  MegaRAID is the hardest RAID
> control software I've ever had to use.  If you can spring for the
> money, get the Areca 1680:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023  Be
> sure to get the battery unit for it.  You can configure it from an
> external ethernet connector very easily, and the performance is
> outstandingly good.

I second the Areca recommendation - excellent controllers. The 1880s are even
better.

Re: RAID Controllers

From: David Boreham
On 8/22/2011 10:55 PM, Scott Marlowe wrote:
> If you're running Linux and thus stuck with the command line on the
> LSI, I'd recommend anything else.  MegaRAID is the hardest RAID
> control software I've ever had to use.  If you can spring for the
> money, get the Areca 1680:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023  Be
> sure to get the battery unit for it.  You can configure it from an
> external ethernet connector very easily, and the performance is
> outstandingly good.
Thanks. I took a look at Areca. The fan on the controller board is a big
warning signal for me (those fans are, in my experience, the single most
unreliable component ever used in computers).

Can you say a bit more about the likely problems with the CLI?
I'm thinking that I configure the card once and copy the config
to all the other boxes, so even if it's as obscure as Cisco IOS,
how bad can it be? Is the concern more with things like a rebuild,
or monitoring for drive failures -- that kind of constant management
task?

How about Adaptec on Linux? The supercapacitor and NAND
flash idea looks like a good one, provided the firmware doesn't
have bugs (true of any write-back controller, though).



Re: RAID Controllers

From: Scott Marlowe
On Tue, Aug 23, 2011 at 4:42 PM, David Boreham <david_list@boreham.org> wrote:
> On 8/22/2011 10:55 PM, Scott Marlowe wrote:
>>
>> If you're running Linux and thus stuck with the command line on the
>> LSI, I'd recommend anything else.  MegaRAID is the hardest RAID
>> control software I've ever had to use.  If you can spring for the
>> money, get the Areca 1680:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816151023  Be
>> sure to get the battery unit for it.  You can configure it from an
>> external ethernet connector very easily, and the performance is
>> outstandingly good.
>
> Thanks. I took a look at Areca. The fan on the controller board is a big
> warning signal for me (those fans are in my experience the single most
> unreliable component ever used in computers).

I've been using Arecas for years, a dozen or more.  Zero fan failures.
One bad card, and it came that way.

> Can you say a bit more about the likely problems with the CLI?

The MegaCLI interface is the single most difficult user interface I've
ever used.  Non-obvious, difficult syntax; google it and you'll get
plenty of hits.

> I'm thinking that I configure the card once and copy the config
> to all the other boxes, so even if it's as obscure as Cisco IOS,

I've dealt with IOS and it's super easy to work with compared to MegaCLI.

> how bad can it be? Is the concern more with things like a rebuild,
> or monitoring for drive failures -- that kind of constant management
> task?

All of it.  I've used it before just enough to never want to touch it
again.  There's a cheat sheet here:
http://tools.rapidsoft.de/perc/perc-cheat-sheet.html

> How about Adaptec on Linux? The supercapacitor and NAND
> flash idea looks like a good one, provided the firmware doesn't
> have bugs (true of any write-back controller, though).

I haven't used the newer cards.  Older ones had a bad rep for
performance but apparently their newer ones can be pretty darned good.

Re: RAID Controllers

From: Greg Smith
On 08/23/2011 06:42 PM, David Boreham wrote:
> I took a look at Areca. The fan on the controller board is a big
> warning signal for me (those fans are in my experience the single most
> unreliable component ever used in computers).

I have one of their really early/cheap models here, purchased in early
2007.  The fan on it just died last month.  Since this is my home
system, I just got a replacement at Radio Shack and spliced the right
connector onto it; had it been a production server I would have bought a
spare fan with the system.

To put this into perspective, that system is on its 3rd power supply,
and has gone through at least 4 drive failures since installation.

> Can you say a bit more about the likely problems with the CLI?

Let's see... this week I needed to figure out how to turn off the
individual drive caches on an LSI system; they are set at the factory to
"use disk's default", which is really strange--it leaves me not even
sure what state that is.  The magic incantation for that one was:

MegaCli -LDSetProp DisDskCache -LALL -aALL

There's certainly a learning curve there.
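A few more incantations cover most of the day-to-day basics. This is a
sketch from my notes, and flag spelling can drift between MegaCli
releases, so verify against your version's help output before relying
on any of it:

```shell
# Commonly needed MegaCli queries and cache settings (syntax may vary
# slightly across MegaCli versions; check your card's documentation):
MegaCli -AdpAllInfo -aALL              # adapter, firmware, and cache summary
MegaCli -LDInfo -Lall -aALL            # logical drive state and cache policy
MegaCli -PDList -aALL                  # physical drives; look for error counts
MegaCli -AdpBbuCmd -GetBbuStatus -aALL # BBU charge and health
MegaCli -LDSetProp WB -LALL -aALL      # write-back on all logical drives
```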

> I'm thinking that I configure the card once and copy the config
> to all the other boxes, so even if it's as obscure as Cisco IOS,
> how bad can it be? Is the concern more with things like a rebuild,
> or monitoring for drive failures -- that kind of constant management
> task?

You can't just copy the configurations around.  All you have are these
low-level things that fix individual settings.  To get the same
configuration on multiple systems, you need to script all of the
changes, and hope that all of the systems ship with the same defaults.

What I do is dump the entire configuration and review that carefully for
each deployment.  It helps to have a checklist and patience.
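That dump-and-review step can be scripted. A minimal sketch, assuming
the common /opt/MegaRAID/MegaCli/MegaCli64 install path (the binary
location and name vary by distro and package):

```shell
#!/bin/sh
# Sketch: dump controller configuration to a per-host file so that
# deployments can be diffed against a known-good reference box.
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64   # assumed install path
OUT="raid-config-$(hostname).txt"
{
  "$MEGACLI" -AdpAllInfo -aALL    # adapter-level settings
  "$MEGACLI" -CfgDsply -aALL      # array/logical drive layout
  "$MEGACLI" -LDInfo -Lall -aALL  # per-LD cache and access policies
} > "$OUT"
# Then review with: diff raid-config-hostA.txt raid-config-hostB.txt
```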

> How about Adaptec on Linux? The supercapacitor and NAND
> flash idea looks like a good one, provided the firmware doesn't
> have bugs (true of any write-back controller, though).

I only have one server with a recent Adaptec controller, a 5405.  That
seemed to be the generation of cards where Adaptec got their act
together on Linux again; they benchmarked well in reviews and the
drivers seem reasonable.  It's worked fine for the small server it's
deployed in.  I haven't been able to test a larger array with one of
them yet, but it sounds like you're not planning to run one of those
anyway.  If I had 24 drives to connect, I'd prefer an LSI controller
just because I know those scale fine to that level; I'm not sure how
well Adaptec does there.  Haven't found anyone brave enough to try that
test yet.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us