Thread: Disk storage and san questions (was File Systems Compared)

Disk storage and san questions (was File Systems Compared)

From: Matthew Schumacher
Date:
Joshua D. Drake wrote:
> I agree. I have many people that want to purchase a SAN because someone
> told them that is what they need... Yet they can spend 20% of the cost
> on two external arrays and get incredible performance...
>
> We are seeing great numbers from the following config:
>
> (2) HP MSA 30s (loaded), dual bus
> (2) HP 6402, one connected to each MSA.
>
> The performance for the money is incredible.

This raises some questions for me.  I just budgeted for a SAN because
I need lots of storage for email/web systems and don't want a bunch of
local disks in each server, with each server requiring its own spares.
The idea is that I can have a platform-wide disk chassis that needs
only one set of spares, and run my Linux hosts diskless.  Since I am
planning on buying the SANRAD iSCSI solution, I would simply boot
hosts with pxelinux and pass a kernel/initrd image that would mount
the iSCSI target as root.  If a server fails, I simply change the MAC
address in the bootp server and bring up a spare in its place.
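
For reference, here's roughly what I have in mind on the dhcp/pxe
side (the hostnames, addresses, and filenames below are made up, and
the exact iSCSI-root options on the APPEND line will depend on
whichever initrd tooling I end up using):

# dhcpd.conf entry; to fail over, swap in the spare box's MAC here
host mailhost1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 10.0.0.21;
  next-server 10.0.0.5;        # tftp server
  filename "pxelinux.0";
}

# pxelinux.cfg entry for that host
LABEL linux
  KERNEL vmlinuz
  APPEND initrd=initrd-iscsi.img ip=dhcp
# plus whatever options the initrd needs to find and mount the
# iscsi target as root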

Now that I'm reading these messages about disk performance and SANs,
it's got me thinking that this solution is not ideal for a database
server.  Also, it appears that several people on the list have
experience with SANs, so perhaps some of you can fill in some blanks
for me:

1.  Is iSCSI a decent way to do a SAN?  How much performance do I lose
vs connecting the hosts directly with a Fibre Channel controller?

2.  Would it be better to omit my database server from the SAN (or at
least the database storage) and stick with local disks?  If so, what
disks/controller card do I want?  I use Dell servers for everything,
so it would be nice if the recommendation is a Dell system, but it
doesn't need to be.  Overall I'm not very impressed with the LSI
cards, but I'm told the new ones are much better.

3.  Anyone use the SANRAD box?  Is it any good?  Consolidating disk
space and disk spares platform-wide seems like a good idea, but I've
not used a SAN before, so I'm nervous about it.

4.  What kind of performance would I get from SATA disks in a JBOD?
If I got eight 200 GB disks, made four RAID 1 mirrors in the JBOD, and
then striped them together in the SANRAD, would that perform decently?
Is there an advantage to splitting RAID 1+0 across the two boxes, or
am I better off doing RAID 1+0 in the JBOD and using the SANRAD as an
iSCSI translator?
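
Just to be clear about the layout I'm describing in #4, if it were all
Linux software RAID it would look something like the following
(illustration only; in practice the mirrors and stripe would be built
in the JBOD and/or SANRAD firmware rather than with mdadm):

# four RAID 1 pairs out of the eight 200 GB disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg /dev/sdh
# one RAID 0 stripe across the four mirrors
mdadm --create /dev/md4 --level=0 --raid-devices=4 \
    /dev/md0 /dev/md1 /dev/md2 /dev/md3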

That's enough questions for now...

Thanks,
schu


Re: Disk storage and san questions (was File Systems Compared)

From: "Bucky Jordan"
Date:
I was working on a project that was considering a Dell/EMC SAN (Dell's
rebranded EMC hardware), so here are some thoughts on your questions
based on that.

> 1.  Is iSCSI a decent way to do a SAN?  How much performance do I
> lose vs connecting the hosts directly with a Fibre Channel
> controller?
It's cheaper, but if you want any sort of reasonable performance,
you'll need a dedicated gigabit network. I'd highly recommend a
dedicated switch too, not just a VLAN. You should also have dual NICs
and dedicate one to iSCSI. Almost all PowerEdges come with dual NICs
these days.
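
If you go with a software initiator, open-iscsi can also pin sessions
to that dedicated NIC. Roughly like this (the interface name and
portal address here are just placeholders):

# create an iface record bound to the dedicated storage NIC
iscsiadm -m iface -I iscsi-eth1 -o new
iscsiadm -m iface -I iscsi-eth1 -o update -n iface.net_ifacename -v eth1
# discover targets through that interface, then log in
iscsiadm -m discovery -t sendtargets -p 192.168.50.10 -I iscsi-eth1
iscsiadm -m node --login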

>
> 2.  Would it be better to omit my database server from the SAN (or at
> least the database storage) and stick with local disks?  If so, what
> disks/controller card do I want?  I use Dell servers for everything,
> so it would be nice if the recommendation is a Dell system, but it
> doesn't need to be.  Overall I'm not very impressed with the LSI
> cards, but I'm told the new ones are much better.
The new Dell PERC 4, and the PERC 5 even more so, are reasonable
performers in my experience. However, this depends on the performance
needs of your database. You should at least be able to do better than
onboard storage (PowerEdges max out at 6 disks, or 8 if you go 2.5"
SATA, but I don't recommend those for reliability/performance
reasons). If you get one of the better Dell/EMC combo SANs, you can
allocate a RAID pool for your database and probably saturate the iSCSI
interface. The next step might be the MD1000 15-disk SAS enclosure
with PERC 5/E cards if you're sticking with Dell, or stepping up to
multi-homed FC cards. (By the way, you can split the MD1000 in half
and share it across two servers, since it has two SCSI cards, and you
can daisy-chain up to three of them for a total of 45 disks.) Either
way, take a good look at what the SAN chassis can support in terms of
I/O bandwidth, because once you use it up, there's no more to allocate
to the DB.
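
As a rough back-of-envelope (ballpark numbers only): gigabit iSCSI
tops out around 100-120 MB/s of payload, 2Gb FC around 200 MB/s, and a
U320 SCSI channel around 320 MB/s, while a single 15K drive can stream
something like 70-80 MB/s sequentially. It only takes a handful of
spindles to saturate the link on sequential work, and random database
I/O becomes seek/IOPS bound well before the raw bandwidth runs out.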

>
> 3.  Anyone use the SANRAD box?  Is it any good?  Consolidating disk
> space and disk spares platform-wide seems like a good idea, but I've
> not used a SAN before, so I'm nervous about it.
>
>
If you haven't used a SAN, much less an enterprise-grade one, then I'd
be very nervous about them too. Optimizing SAN performance is much
more difficult than direct-attached storage, simply because of the
added complexity. Definitely plan on a pretty steep learning curve,
especially for something like EMC and a good number of servers.

IMO, the big benefit of a SAN is storage management and utilization,
not necessarily performance (you can get decent performance if you buy
the right hardware and tune it correctly). To your points: you can
reduce the number of hot spares and allocate storage much more
efficiently. You can also allocate storage pools based on performance
needs: slow 500 GB SATA drives for archive, fast 15K SAS for the DB,
and so on. There are some nice failover options too; as you mentioned,
boot-from-SAN lets you swap hardware, but I would get a demonstration
from the vendor of this working with your hardware/OS setup (including
booting up the cold spare server). I know this was a big issue with
some of the earlier Dell/EMC hardware.

Sorry for the long post, but hopefully some of the info will be useful
to you.

Bucky