Thread: SAN/NAS options
Hello all,

It seems that I'm starting to outgrow our current Postgres setup. We've been running a handful of machines as standalone db servers. This is all in a colocation environment, so everything is stuffed into 1U Supermicro boxes. Our standard build looks like this:

Supermicro 1U w/SCA backplane and 4 bays
2x2.8 GHz Xeons
Adaptec 2015S "zero channel" RAID card
2 or 4 x 73GB Seagate 10K Ultra 320 drives (mirrored+striped)
2GB RAM
FreeBSD 4.11
PGSQL data from 5-10GB per box

Recently I started studying what we were running up against in our nightly runs that do a ton of updates/inserts to prep things for the tasks the db does during the business day (light mix of selects/inserts/updates). While we have plenty of disk bandwidth (according to bonnie), we are really dying on IOPS. I'm guessing this is a mix of a rather anemic RAID controller (ever notice how Adaptec doesn't publish any real performance specs on RAID cards?) and having only two or four spindles (effectively 1 or 2 on writes).

So that's where we are...

I'm new to the whole SAN thing, but did recently pick up a few used NetApp shelves and a Fibre Channel RAID HBA (Mylex ExtremeRAID 3000, also used) to toy with. I started wondering if I could put something together to both get our storage on one set of boxes and allow me to get data striped across more drives. Our budget is not huge and we are not averse to getting used gear where appropriate.

What do you folks recommend? I'm just starting to look at what's out there for SANs and NAS, and from what I've seen, our options are:

NetApp Filers - the pluses with these are that if we use NFS, we don't have to worry about either large filesystem support in FreeBSD (2TB practical limit) or limitations on "growing" partitions, as the NetApp just deals with that. I also understand these make backups a bit simpler. I have a great, trusted, spare-stocking source for these.

Apple Xserve RAID - well, it's pretty cheap. Honestly, that's all I know about it - they don't talk about IOPS numbers, and I have no idea what lurks in that box as a RAID controller.

SAN box w/integrated RAID - it seems like this might not be a good choice, since the RAID hardware in the box may be where I hit any limits. I also imagine I'm probably overpaying for some OEM RAID controller integrated into the box. No idea where to look for used gear.

SAN box, JBOD - this seems like it might be affordable as well: a few big shelves full of drives, a SAN "switch" to plug all the shelves and hosts into, and a FC RAID card in each host. No idea where to look for used gear here either.

You'll note that I'm being somewhat driven by my OS of choice, FreeBSD. Unlike Solaris or other commercial offerings, there is no nice volume management available. While I'd love to keep managing a dozen or so FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume management really shines and Postgres performs well on it.

Lastly, one thing that I'm not yet finding in trying to educate myself on SANs is a good overview of what's come out in the past few years that's more affordable than the old big-iron stuff. For example, I saw some brief info in this list's archives about the Dell/EMC offerings. Anything else in that vein to look at?

I hope this isn't too far off topic for this list. Postgres is the main application that I'm looking to accommodate. Anything else I can do with whatever solution we find is just gravy...

Thanks!

Charles
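For anyone wanting to confirm this kind of IOPS bottleneck on FreeBSD, a minimal sketch (assuming the array appears as da0):

    # watch transfers/sec (tps) against MB/s during the heavy nightly run;
    # high tps with modest MB/s is the signature of an IOPS-bound load
    iostat -w 1 da0

    # systat gives a similar live view of disk activity
    systat -vmstat 1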
Charles,

> Lastly, one thing that I'm not yet finding in trying to
> educate myself on SANs is a good overview of what's come out
> in the past few years that's more affordable than the old
> big-iron stuff. For example I saw some brief info on this
> list's archives about the Dell/EMC offerings. Anything else
> in that vein to look at?

My two cents: SAN is a bad investment; go for big internal storage. The 3Ware or Areca SATA RAID adapters kick butt, and if you look in the newest colos (I was just in ours, 365main.net, today), you will see rack on rack of machines with from 4 to 16 internal SATA drives. Are they all DB servers? Not necessarily, but that's where things are headed.

You can get a 3U server with dual Opteron 250s, 16GB RAM and 16x 400GB SATA II drives with the 3Ware 9550SX controller for $10K - we just ordered 4 of them. I don't think you can buy an external disk chassis and a Fibre Channel NIC for that.

Performance? 800MB/s RAID5 reads, 400MB/s RAID5 writes. Random IOs are also very high for RAID10, but we don't use it, so YMMV - look at Areca and 3Ware.

Manageability? Good web management interfaces with 6+ years of development from 3Ware: e-mail alerts, online rebuild options, all the goodies. No "snapshot" or offline backup features like the high-end SANs, but do you really need them?

Need more power or storage over time? Run a parallel DB like Bizgres MPP - you can add more servers with internal storage and increase your I/O, CPU and memory.

- Luke
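For what it's worth, the 3Ware 9000-series cards also ship a command-line tool (tw_cli) alongside the web interface; a rough sketch of typical usage, assuming the card shows up as controller 0:

    tw_cli /c0 show      # controller, unit, and drive status at a glance
    tw_cli /c0/u0 show   # details for the first RAID unit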
Charles Sprickman wrote:
> Hello all,
>
> It seems that I'm starting to outgrow our current Postgres setup. We've
> been running a handful of machines as standalone db servers. This is
> all in a colocation environment, so everything is stuffed into 1U
> Supermicro boxes.
> [...]

Leaving the whole SAN issue for a moment:

It would be interesting to see if moving to FreeBSD 6.0 would help you - the vfs layer is no longer throttled by the (SMP) GIANT lock in this version, and that may make quite a difference (given you have SMP boxes).

Another interesting thing to try is rebuilding the database ufs filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or 16K/2K - can't recall the default on 4.x). I found this to give a factor of 2 speedup on random disk access (specifically queries doing indexed joins).

Is it mainly your 2-disk machines that are IOPS bound? If so, a cheap option may be to buy 2 more Cheetahs for them! If it's the 4-disk ones, well, how about a 2U U320 disk pack from whomever supplies you the Supermicro boxes?

I have just noticed Luke's posting - I would second the advice to avoid SAN; in my experience it's an expensive way to buy storage.

best wishes

Mark
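If anyone wants to try Mark's 32K/4K layout, the rebuild is a one-liner with newfs - a sketch, assuming the data filesystem is /dev/da1s1e and has been backed up first, since newfs destroys the existing filesystem:

    umount /data
    newfs -b 32768 -f 4096 /dev/da1s1e    # 32K blocks, 4K frags
    mount /data
    # restore the postgres data files, then restart the postmaster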
The Apple is, as you say, cheap (except the Apple markup on the disks fuzzes that a bit). It's easy to set up and has been quite reliable for me, but do not expect anything resembling good DB performance out of it (I gave up running anything but backup DBs on it). From the mouths of Apple guys, it (and Xsan) are heavily optimized for sequential access. They want to sell piles of these to the music/film industry, where they have some cred. Oracle has apparently gotten some performance gains through raw-device pixie dust and voodoo, but even as a (reluctant, kicking-and-screaming) Oracle guy I wouldn't go there.

Other goofy things about it: it isn't one device with 14 disks and redundant controllers. It's two 7-disk arrays with non-redundant controllers. It doesn't do RAID10. If you want a gob-o-space with no performance requirements, it's fine. Otherwise...

On 12/14/05 1:56 AM, "Charles Sprickman" <spork@bway.net> wrote:

> Hello all,
>
> It seems that I'm starting to outgrow our current Postgres setup. We've been
> running a handful of machines as standalone db servers.
> [...]
On Wed, Dec 14, 2005 at 11:53:52AM -0500, Andrew Rawnsley wrote:
> Other goofy things about it: it isn't one device with 14 disks and redundant
> controllers. It's two 7-disk arrays with non-redundant controllers. It
> doesn't do RAID10.

And if you want hot spares you need *two* per tray (one for each controller). That definitely changes the cost curve. :)

Mike Stone
Luke,

How did you measure 800MB/sec - is it cached, or physical I/O?

-anjan

-----Original Message-----
From: Luke Lonergan [mailto:LLonergan@greenplum.com]
Sent: Wednesday, December 14, 2005 2:10 AM
To: Charles Sprickman; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SAN/NAS options

> [Luke's message quoted in full; see above]
Physical using xfs on Linux.

- Luke

--------------------------
Sent from my BlackBerry Wireless Device

-----Original Message-----
From: Anjan Dave <adave@vantage.com>
To: Luke Lonergan <LLonergan@greenplum.com>; Charles Sprickman <spork@bway.net>; pgsql-performance@postgresql.org
Sent: Thu Dec 15 16:13:04 2005
Subject: RE: [PERFORM] SAN/NAS options

> [previous messages quoted in full; see above]
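For anyone wanting to reproduce that sort of number, the usual trick (a sketch, not necessarily Luke's exact method) is to read a file much larger than RAM so the page cache can't answer:

    # on a 16GB box, stream ~32GB; divide bytes moved by elapsed time for MB/s
    dd if=/data/bigfile of=/dev/null bs=1M count=32768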
On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote: > Another interesting thing to try is rebuilding the database ufs > filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or > 16K/2K - can't recall the default on 4.x). I found this to give a factor > of 2 speedup on random disk access (specifically queries doing indexed > joins). Even if you're doing a lot of random IO? I would think that random IO would perform better if you use smaller (8K) blocks, since there's less data being read in and then just thrown away that way. > Is it mainly your 2 disk machines that are IOPS bound? if so, a cheap > option may be to buy 2 more cheetahs for them! If it's the 4's, well how > about a 2U U320 diskpack from whomever supplies you the Supermicro boxes? Also, on the 4 drive machines if you can spare the room you might see a big gain by putting the tables on one mirror and the OS and transaction logs on the other. -- Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
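Moving the transaction logs onto the other mirror is simple; the usual recipe is a symlink (a sketch - paths are placeholders, and postgres must be stopped first):

    mv /usr/local/pgsql/data/pg_xlog /mirror2/pg_xlog
    ln -s /mirror2/pg_xlog /usr/local/pgsql/data/pg_xlog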
On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:
> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD.
> Unlike Solaris or other commercial offerings, there is no nice volume
> management available. While I'd love to keep managing a dozen or so
> FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume
> management really shines and Postgres performs well on it.

Have you looked at vinum? It might not qualify as a true volume manager, but it's still pretty handy.

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
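For reference, a minimal vinum setup mirroring two disks looks something like this (a sketch; drive names and device paths are assumptions):

    # /etc/vinum.conf
    drive d1 device /dev/da1s1e
    drive d2 device /dev/da2s1e
    volume pgdata
      plex org concat
        sd length 0 drive d1
      plex org concat
        sd length 0 drive d2

Then 'vinum create /etc/vinum.conf', 'newfs -v /dev/vinum/pgdata', and mount it like any other filesystem.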
Jim C. Nasby wrote:
> On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote:
>
>> Another interesting thing to try is rebuilding the database ufs
>> filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or
>> 16K/2K - can't recall the default on 4.x). I found this to give a factor
>> of 2 speedup on random disk access (specifically queries doing indexed
>> joins).
>
> Even if you're doing a lot of random IO? I would think that random IO
> would perform better if you use smaller (8K) blocks, since there's less
> data being read in and then just thrown away that way.

Yeah, that's what I would have expected too! But the particular queries I tested do a ton of random IO (correlation of 0.013 on the join column for the big table). I did wonder if the gain has something to do with the underlying RAID stripe size (64K or 256K in my case), as I have only tested the 32K vs 8K/16K layouts on RAIDed systems.

I guess for a system where the number of concurrent users gives rise to memory pressure, the larger blocks will cause more thrashing of the file buffer cache, and it could be a net loss. Still worth trying out I think - you will know soon enough if it is a win or a loss!

Note that I did *not* alter the Postgres page/block size (BLCKSZ) from 8K, so no dump/reload is required to test this out.

cheers

Mark
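To check what block size an existing cluster was built with, pg_controldata reports it; a quick sketch, assuming the default data directory:

    pg_controldata /usr/local/pgsql/data | grep -i 'block size'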
On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:
> Even if you're doing a lot of random IO? I would think that random IO
> would perform better if you use smaller (8K) blocks, since there's less
> data being read in and then just thrown away that way.

The overhead of reading an 8k block instead of a 32k block is too small to measure on modern hardware. The seek is what dominates; leaving the read head on a little longer and then transmitting a little more over a 200 megabyte-per-second channel is statistical fuzz.

Mike Stone
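To put rough numbers on that: on a 10K RPM disk, an average seek plus rotational latency comes to roughly 8ms, while the extra 24KB that a 32K read carries over an 8K read takes about 0.12ms at 200MB/s - roughly a 1.5% penalty per random read, well inside the noise.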
On Fri, Dec 16, 2005 at 05:51:03PM -0500, Michael Stone wrote: > On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote: > >Even if you're doing a lot of random IO? I would think that random IO > >would perform better if you use smaller (8K) blocks, since there's less > >data being read in and then just thrown away that way. > > The overhead of reading an 8k block instead of a 32k block is too small > to measure on modern hardware. The seek is what dominates; leaving the > read head on a little longer and then transmitting a little more over a > 200 megabyte channel is statistical fuzz. True, but now you've got 4x the amount of data in your cache that you probably don't need. Looks like time to do some benchmarking... -- Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
On Fri, Dec 16, 2005 at 06:25:25PM -0600, Jim C. Nasby wrote: >True, but now you've got 4x the amount of data in your cache that you >probably don't need. Or you might be 4x more likely to have data cached that's needed later. If you're hitting disk either way, that's probably more likely than the extra IO pushing something critical out--if *all* the important stuff were cached you wouldn't be doing the seeks in the first place. This will obviously be heavily dependent on the amount of ram you've got and your workload, so (as always) you'll have to benchmark it to get past the hand-waving stage. Mike Stone
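A crude way to get past that hand-waving stage on FreeBSD - a sketch, assuming a test file much larger than RAM (here 4GB, i.e. 524288 8K blocks) and the BSD jot utility for random offsets:

    # 1000 random 8K reads; repeat with bs=32k (and skip range 0-131071) to compare
    for i in `jot 1000`; do
      dd if=/data/testfile of=/dev/null bs=8k count=1 skip=`jot -r 1 0 524287` 2>/dev/null
    done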
Jim C. Nasby wrote:
> On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:
>> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD.
>> Unlike Solaris or other commercial offerings, there is no nice volume
>> management available. While I'd love to keep managing a dozen or so
>> FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume
>> management really shines and Postgres performs well on it.
>
> Have you looked at vinum? It might not qualify as a true volume manager,
> but it's still pretty handy.

I am looking very closely at purchasing a SANRAD Vswitch 2000, a Nexsan SATABoy with SATA disks, and the Qlogic iscsi controller cards.

Nexsan claims up to 370MB/s sustained per controller and 44,500 IOPS, but I'm not sure if that is good or bad. It's certainly faster than the LSI MegaRAID controller I'm using now with a RAID 1 mirror.

The SANRAD box looks like it saves money in that you don't have to buy controller cards for everything, but for I/O-intensive servers such as the database server, I would end up buying an iscsi controller card anyway.

At this point I'm not sure what the best solution is. I like the idea of having logical disks available through iscsi because of how flexible it is, but I really don't want to spend $20k (10 for the Nexsan and 10 for the SANRAD) and end up with poor performance.

One other advantage to iscsi is that I can go completely diskless on my servers and boot from iscsi, which means that I don't have to have spare disks for each host; now I just have spare disks for the Nexsan chassis.

So the question becomes: has anyone put postgres on an iscsi san, and if so how did it perform?

schu
Usually manufacturers' claims are tested in 'ideal' conditions; they may not translate well to the bandwidth seen on the host side. A 2Gbps Fibre Channel connection would (ideally) give you about 250MB/sec per HBA. Not sure how it translates for GigE considering scsi protocol overheads, but you may want to confirm from them how they achieved 370MB/sec (how many iSCSI controllers, what file system, how many drives, what RAID type, block size, stripe size, cache settings, etc.), and whether it was physical I/O or cached. In other words, if someone has any benchmark numbers, that would be helpful.

Regarding diskless iscsi boots for future servers, remember that it's shared storage: if you have a busy server attached to your Nexsan, you may have to think twice about sharing the performance (throughput and IOPS of the storage controller) without impacting the existing hosts, unless you are sizing for that now. And you want to have a pretty clean GigE network, more or less dedicated to this block traffic.

Large internal storage with more memory and AMD CPUs is an option, as Luke had originally suggested. Check out Appro as well.

I'd also be curious to know if someone has been using this (SATA/iSCSI/SAS) solution and what are some I/O numbers observed.

Thanks,

Anjan

-----Original Message-----
From: Matthew Schumacher [mailto:matt.s@aptalaska.net]
Sent: Mon 12/19/2005 7:41 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SAN/NAS options

> [Matthew's message quoted in full; see above]
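For Postgres-level numbers rather than raw device figures, contrib/pgbench is an easy first pass - a sketch, with the scale factor chosen so the test database (roughly 15MB per unit of scale) comfortably exceeds RAM:

    pgbench -i -s 1000 testdb      # initialize ~15GB of test data
    pgbench -c 10 -t 1000 testdb   # 10 clients, 1000 transactions each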
Following up to myself again...

On Wed, 14 Dec 2005, Charles Sprickman wrote:

> Hello all,
>
> Supermicro 1U w/SCA backplane and 4 bays
> 2x2.8 GHz Xeons
> Adaptec 2015S "zero channel" RAID card

I don't want to throw away the four machines of that type that we have. I do want to throw away the ZCR cards... :) If I ditch those I still have a 1U box with a U320 scsi plug on the back.

I'm vaguely considering pairing these two devices:

http://www.areca.us/products/html/products.htm

That's an Areca 16-channel SATA II (I haven't even read up on what's new in SATA II) RAID controller with an optional U320 SCSI daughter card to connect to the host(s).

http://www.chenbro.com.tw/Chenbro_Special/RM321.php

How can I turn that box down? Those people in the picture look very excited about it! Seriously though, it looks like an interesting and economical pairing that gives me most of what I'm looking for:

-a modern RAID engine
-small form factor
-remote management of the array
-ability to reuse my current db hosts that are disk-bound

Disadvantages:

-only 1 or 2 hosts per box
-more difficult to move storage from host to host (compared to a SAN or NAS system)
-no fancy NetApp features like snapshots
-I have no experience with Areca SATA->SCSI RAID controllers

Any thoughts on this? The controller looks to be about $1500, the enclosure about $400, and the drives are no great mystery; cost would depend on what total capacity I'm looking for.

Our initial plan is to set one up for storage for a mail archive project, and to also have a host use this storage to host replicated copies of all Postgres databases. If things look good, we'd start moving our main PG hosts to use a similar RAID box.

Thanks,

Charles
Charles,

On 1/14/06 6:37 PM, "Charles Sprickman" <spork@bway.net> wrote:

> I'm vaguely considering pairing these two devices:
>
> http://www.areca.us/products/html/products.htm
>
> That's an Areca 16-channel SATA II (I haven't even read up on what's new
> in SATA II) RAID controller with an optional U320 SCSI daughter card to
> connect to the host(s).

I'm confused - SATA with a SCSI daughter card? Where does the SCSI go? The Areca has a number (8, 12, 16) of single-drive-attach SATA ports coming out of it, each of which will go to a disk drive connection on the backplane.

> http://www.chenbro.com.tw/Chenbro_Special/RM321.php
>
> How can I turn that box down? Those people in the picture look very
> excited about it! Seriously though, it looks like an interesting and
> economical pairing that gives me most of what I'm looking for:

What a picture! I'm totally enthusiastic all of a sudden! I'm putting !!! at the end of every sentence!

We just bought 4 very similar systems that use the chassis from California Design - our latest favorite source: http://www.asacomputers.com/

They did an excellent job of setting the systems up, with proper labeling and quality control. They also installed Fedora Core 4 and set up the filesystems; the only mistake they made was that they didn't enable 2TB clipping, so we had to rebuild the RAIDs (and install CentOS with the xfs filesystem).

We paid $10.4K each for 16x 400GB WD RE2 SATA II drives, 16GB RAM and two Opteron 250s. We also put a single 200GB SATA system drive into each. RAID card is the 3Ware 9550SX. Performance has been stunning - we're getting 800MB/s sustained I/O throughput using the two 9550SX controllers in parallel.

> Any thoughts on this? The controller looks to be about $1500, the
> enclosure about $400, and the drives are no great mystery; cost would
> depend on what total capacity I'm looking for.

I'd get ASA to build it for you - use the Tyan 2882-series motherboard for greatest stability. They may try to sell you hard on the SuperMicro boards; we've had less luck with them.

> Our initial plan is to set one up for storage for a mail archive project,
> and to also have a host use this storage to host replicated copies of all
> Postgres databases. If things look good, we'd start moving our main PG
> hosts to use a similar RAID box.

Good approach. I'm personally spending as much time using these machines as I can - they are the fastest I've been on in a *long* time.

- Luke
> Following up to myself again...
>
> On Wed, 14 Dec 2005, Charles Sprickman wrote:
>
>> Hello all,
>>
>> Supermicro 1U w/SCA backplane and 4 bays
>> 2x2.8 GHz Xeons
>> Adaptec 2015S "zero channel" RAID card
>
> I don't want to throw away the four machines of that type that we have.
> I do want to throw away the ZCR cards... :) If I ditch those I still
> have a 1U box with a U320 scsi plug on the back.
>
> I'm vaguely considering pairing these two devices:
> http://www.areca.us/products/html/products.htm
> http://www.chenbro.com.tw/Chenbro_Special/RM321.php

The combination definitely looks attractive; I have only been hearing positive things about the Areca cards.

> Disadvantages:
>
> -only 1 or 2 hosts per box
> -more difficult to move storage from host to host (compared to a SAN
> or NAS system)
> -no fancy NetApp features like snapshots
> -I have no experience with Areca SATA->SCSI RAID controllers
>
> Any thoughts on this? The controller looks to be about $1500, the
> enclosure about $400, and the drives are no great mystery; cost would
> depend on what total capacity I'm looking for.

Another "usage model" that could be appropriate would be ATA-over-Ethernet...

<http://en.wikipedia.org/wiki/ATA-over-Ethernet>

> Our initial plan is to set one up for storage for a mail archive
> project, and to also have a host use this storage to host replicated
> copies of all Postgres databases. If things look good, we'd start
> moving our main PG hosts to use a similar RAID box.

We're thinking about some stuff like this to host apps that require bulky amounts of disk but are otherwise not high-TPC. This is definitely not a "gold plated" answer, compared to the NetApp and EMC boxes of the world, but it can be useful in contexts where they are too expensive.

-- 
let name="cbbrowne" and tld="gmail.com" in String.concat "@" [name;tld];;
http://linuxdatabases.info/info/x.html
It is usually a good idea to put a capacitor of a few microfarads across the output, as shown.
On Sat, Jan 14, 2006 at 09:37:01PM -0500, Charles Sprickman wrote:
> I'm vaguely considering pairing these two devices:
>
> http://www.areca.us/products/html/products.htm
>
> That's an Areca 16-channel SATA II (I haven't even read up on what's new
> in SATA II) RAID controller with an optional U320 SCSI daughter card to
> connect to the host(s).
>
> http://www.chenbro.com.tw/Chenbro_Special/RM321.php

Not sure how significant, but the RM321 backplane claims to support SATA 150 (aka SATA I) only.

-Mike