Joshua D. Drake wrote:
> I agree. I have many people that want to purchase a SAN because someone
> told them that is what they need... Yet they can spend 20% of the cost
> on two external arrays and get incredible performance...
>
> We are seeing great numbers from the following config:
>
> (2) HP MSA30s (loaded), dual bus
> (2) HP 6402s, one connected to each MSA.
>
> The performance for the money is incredible.
This raises some questions for me. I just budgeted for a SAN because I
need lots of storage for email/web systems and don't want a bunch of
local disks in each server, with each server requiring its own spares.
The idea is that I can have a platform-wide disk chassis which requires
only one set of spares, and run my Linux hosts diskless. Since I am
planning on buying the SANRAD iSCSI solution, I would simply boot hosts
with pxelinux and pass a kernel/initrd image that would mount the iSCSI
target as root. If a server fails, I simply change the MAC address in
the bootp server and bring up a spare in its place.
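For concreteness, here is roughly what I have in mind. This is just a
sketch: all the names, MACs, and addresses are made up, I'm assuming
ISC dhcpd, and initrd-iscsi.img would be a custom initrd I'd have to
build whose scripts start the iSCSI initiator and mount the target as
root before pivoting to it:

    # dhcpd.conf -- on failover, I just point the spare's MAC at the
    # failed server's boot entry by editing the hardware ethernet line
    host web01 {
        hardware ethernet 00:11:22:33:44:55;   # swap in spare's MAC
        fixed-address 192.168.10.21;
        next-server 192.168.10.5;              # TFTP server with pxelinux
        filename "pxelinux.0";
    }

    # pxelinux.cfg/default -- kernel plus the custom iSCSI-root initrd
    DEFAULT linux
    LABEL linux
        KERNEL vmlinuz
        APPEND initrd=initrd-iscsi.img ip=dhcp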
Now that I'm reading these messages about disk performance and SANs,
it has me thinking that this solution is not ideal for a database
server. Also, it appears that several people on the list have
experience with SANs, so perhaps some of you can fill in some blanks
for me:
1. Is iSCSI a decent way to do a SAN? How much performance do I lose
versus connecting the hosts directly with a Fibre Channel controller?
2. Would it be better to omit my database server from the SAN (or at
least the database storage) and stick with local disks? If so, what
disks/controller card do I want? I use Dell servers for everything, so
it would be nice if the recommendation were a Dell system, but it
doesn't need to be. Overall I'm not very impressed with the LSI cards,
but I'm told the new ones are much better.
3. Has anyone used the SANRAD box? Is it any good? Consolidating disk
space and disk spares platform-wide seems like a good idea, but I've
never used a SAN before, so I'm nervous about it.
4. What would be the performance of SATA disks in a JBOD? If I got 8
200 GB disks and made 4 RAID 1 mirrors in the JBOD, then striped them
together in the SANRAD, would that perform decently? Is there an
advantage to splitting RAID 1+0 across the two boxes, or am I better
off doing RAID 1+0 in the JBOD and using the SANRAD as an iSCSI
translator? (I've sketched the capacity math below.)
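For reference, my understanding of the raw math on that layout; this
is capacity and spindle counts only, since actual throughput will
depend on the controller and the iSCSI path:

    8 x 200 GB SATA disks
      -> 4 RAID 1 pairs:   200 GB usable per pair; a write hits both
                           disks in a pair, so ~1 spindle of write speed
      -> striped together: 4 x 200 GB = 800 GB usable; sequential
                           writes spread over 4 spindles, reads over
                           up to 8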
That's enough questions for now....
Thanks,
schu