Thread: Looking for a cheap upgrade (RAID)
I have a server on a standard PC right now:
PIII 700, 1 GB RAM (SDRAM), 40 GB IDE, RedHat 8.0, PostgreSQL 7.3.1

The database has 3 tables that just broke 10 million tuples (yeah, I think I'm entering into the world of real databases ;-)
It's primarily bulk (COPY) inserts and queries, rarely an update.

I am looking at moving this to a P4 2.4 GHz, 2 GB RAM (DDR), RedHat 8, PostgreSQL 7.3.latest.

My primary reason for posting this is to help filter through the noise and get me pointed in the right direction.

I realize that I'm a RAID-on-Linux newbie, so any suggestions are appreciated. I'm thinking I want to put this on an IDE RAID array, probably 0+1. IDE seems to be cheap and effective these days. What I've been able to glean from other postings is that I should have 3 drives: 2 for the database with striping, and another for the WAL. Am I way off base here?

I would also appreciate RAID hardware suggestions (brands, etc.). And as always, I'm not afraid to RTFM if someone can point me to the FM :-)

Cost is quite a high priority; I'm getting pretty good at making something out of nothing for everyone :)

TIA for any suggestions.
Chad
Chad,

> I realize that I'm a RAID-on-Linux newbie, so any suggestions are appreciated.
> I'm thinking I want to put this on an IDE RAID array, probably 0+1. IDE seems
> to be cheap and effective these days.
> What I've been able to glean from other postings is that I should have 3
> drives: 2 for the database with striping, and another for the WAL.

Well, RAID 0+1 is only relevant if you have more than 2 drives. Otherwise, it's just RAID 1 (which is a good choice for PostgreSQL).

More disks is almost always better. Putting WAL on a separate (non-RAID) disk is usually a very good idea.

> I would also appreciate RAID hardware suggestions (brands, etc.).
> And as always, I'm not afraid to RTFM if someone can point me to the FM :-)

Use Linux software RAID. To get hardware RAID that beats Linux software RAID, you have to spend $800 or more.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco
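A minimal sketch of relocating WAL the way Josh suggests, assuming PostgreSQL 7.3's default layout with $PGDATA at /var/lib/pgsql/data and the dedicated WAL disk mounted at /mnt/wal (both paths are hypothetical; adjust to your install):

    # stop the postmaster before touching pg_xlog
    pg_ctl stop -D /var/lib/pgsql/data
    # move the WAL directory to the dedicated disk and symlink it back
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
    ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl start -D /var/lib/pgsql/data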
On Fri, 2 May 2003, Chad Thompson wrote:

> I realize that I'm a RAID-on-Linux newbie, so any suggestions are appreciated.
> I'm thinking I want to put this on an IDE RAID array, probably 0+1. IDE seems
> to be cheap and effective these days.
> What I've been able to glean from other postings is that I should have 3
> drives: 2 for the database with striping, and another for the WAL.
> Am I way off base here?
>
> Cost is quite a high priority; I'm getting pretty good at making
> something out of nothing for everyone :)

My experience has been that with IDE drives, RAID-5 is pretty good (85% of the performance of RAID-1 in real use). Stacked X+0 arrays in the Linux kernel (2.4.7 is what I tested; no idea about newer kernel versions) are no faster than plain X, where X is 1 or 5. I think there are parallelism issues when stacking Linux software RAID arrays. That said, their performance in stock RAID-1 and RAID-5 configurations is quite good.

If your writes happen during off hours, or only account for a small portion of your I/O, then a separate WAL drive isn't gonna win you much; it's a heavily written environment that gains from that.
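For reference, a minimal sketch of building those stock RAID-1 and RAID-5 sets with Linux software RAID's mdadm tool (raidtools and /etc/raidtab work equally well on older installs; all device names are hypothetical):

    # two-disk mirror, e.g. for the OS
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2
    # three-disk RAID-5 for the data
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/hda3 /dev/hdc3 /dev/hde3
    mke2fs -j /dev/md1        # ext3 on the array
    cat /proc/mdstat          # watch the initial resync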
Can WAL and the swap partition be on the same drive?

Thanks
Chad

----- Original Message -----
From: "Josh Berkus" <josh@agliodbs.com>
To: "Chad Thompson" <chad@weblinkservices.com>; "pgsql-performance" <pgsql-performance@postgresql.org>
Sent: Friday, May 02, 2003 2:10 PM
Subject: Re: [PERFORM] Looking for a cheap upgrade (RAID)

> More disks is almost always better. Putting WAL on a separate (non-RAID)
> disk is usually a very good idea.
Seeing as you'll have 2 gigs of RAM, your swap partition is likely to grow cobwebs, so where you put it probably isn't that critical.

What I usually do: say you have 4 120 GB drives. Allocate 1 GB on each for swap, so you have 4 GB of swap (your swap should be larger than available memory in Linux for performance reasons); use the first 5 or so GB of each drive to house most of the OS; and put the rest into another RAID array hosting the database. Since the root partition can't be on RAID-5, you'd have to set up either a single drive or a mirror set to handle that.

With that setup, you'd have 15 GB for the OS, 4 GB for swap, and about 300 GB for the database. The nice thing about RAID 5 is that random read performance under parallel load gets better as you add drives. Write performance gets a little better with more drives too, since it's likely that the drives you're writing to aren't the same ones being read.

Since your swap is likely to never see much use, except for offline storage of long-running processes that haven't been accessed recently, it's probably fine to put it on the same drive as the WAL. Honestly, though, I've not found a great increase from drive configuration under IDE. With SCSI, rearranging can make a bigger difference; maybe it's the better bus design, I don't know for sure. Test them if you have the time now; you won't get to take apart a working machine to test it after it's up. :)

On Fri, 2 May 2003, Chad Thompson wrote:

> Can WAL and the swap partition be on the same drive?
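A sketch of that per-drive layout (partition numbers and exact sizes are illustrative, repeated on each of the four 120 GB drives):

    /dev/hda1    1 GB    swap              # one per drive, equal priority
    /dev/hda2    5 GB    /dev/md0 member   # RAID-1 for /, since / can't be on RAID-5
    /dev/hda3  114 GB    /dev/md1 member   # RAID-5 set hosting the database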
Scott,

> With that setup, you'd have 15 GB for the OS, 4 GB for swap, and about
> 300 GB for the database. The nice thing about RAID 5 is that random
> read performance under parallel load gets better as you add drives.
> Write performance gets a little better with more drives too, since it's
> likely that the drives you're writing to aren't the same ones being read.

Yeah, but I've found that with relatively few drives (such as the minimum of 3), RAID 5 write performance is considerably worse than RAID 1 -- as bad as 30-40% of the speed of a raw SCSI disk. This problem goes away with more disks, of course.

--
-Josh Berkus
 Aglio Database Solutions: complete information technology and data
 management solutions for law firms, small businesses and non-profits.
 josh@agliodbs.com | (415) 565-7293 | fax 621-2533 | San Francisco
On Fri, 2 May 2003, Josh Berkus wrote:

> Yeah, but I've found that with relatively few drives (such as the minimum
> of 3), RAID 5 write performance is considerably worse than RAID 1 -- as
> bad as 30-40% of the speed of a raw SCSI disk. This problem goes away
> with more disks, of course.

Yeah, my RAID test box is an old dual PPro 200 with 6 to 8 2 GB drives in it, on two separate SCSI channels. It's truly amazing how much better RAID-5 gets when you put that many drives together. Of course, RAID 0 on that setup really flies. :-0

I'd have to say that if you're only gonna need 50 or so gigs max, a RAID-1 is much easier to configure, and with a hot spare it's very reliable.
On Friday 02 May 2003 16:10, Josh Berkus wrote:
> More disks is almost always better. Putting WAL on a separate (non-RAID)
> disk is usually a very good idea.

From a performance POV, perhaps. The subject came up on -hackers recently, and it was pointed out that if you use RAID for reliability and redundancy rather than for performance, you need to keep the WAL files on the RAID too.

--
D'Arcy J.M. Cain <darcy@{druid|vex}.net>  |  Democracy is three wolves
http://www.druid.net/darcy/               |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)         |  what's for dinner.
On Saturday 03 May 2003 02:50, scott.marlowe wrote:
> Seeing as you'll have 2 gigs of RAM, your swap partition is likely to grow
> cobwebs, so where you put it probably isn't that critical.
>
> What I usually do: say you have 4 120 GB drives. Allocate 1 GB on each for
> swap, so you have 4 GB of swap (your swap should be larger than available
> memory in Linux for performance reasons) ...

Setting swap in Linux is a tricky proposition. If there is no swap at all, Linux has behaved crazily in the past; these days the situation is much better.

In my experience with a single IDE disk, if swap usage goes above 20-30 MB due to a shortage of memory, the machine is dead in the water. Linux sometimes does memory inversion, where the swap used is half the free memory but the swap is not freed; that doesn't really hurt, though.

So my advice is: setting swap to more than 128 MB is a waste of disk space. OK, 256 MB in ultra-extreme situations, but more than that would be inadvisable.

Shridhar
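A minimal sketch of adding a modest amount of swap along those lines (size and path are purely illustrative):

    dd if=/dev/zero of=/swapfile bs=1M count=256   # 256 MB swap file
    mkswap /swapfile
    swapon /swapfile
    swapon -s                                      # verify it's active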
On Saturday 03 May 2003 13:27, D'Arcy J.M. Cain wrote:
> From a performance POV, perhaps. The subject came up on -hackers recently,
> and it was pointed out that if you use RAID for reliability and redundancy
> rather than for performance, you need to keep the WAL files on the RAID
> too.

But for performance reasons, that RAID can be separate from the data RAID. :-)

Shridhar

--
"Gee, Toto, I don't think we are in Kansas anymore."
On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:
> I have a server on a standard PC right now:
> PIII 700, 1 GB RAM (SDRAM), 40 GB IDE, RedHat 8.0, PostgreSQL 7.3.1
>
> The database has 3 tables that just broke 10 million tuples (yeah, I think
> I'm entering into the world of real databases ;-)
> It's primarily bulk (COPY) inserts and queries, rarely an update.
[snip]

How big do you expect the database to get?

If I may be a contrarian: if it's under 70 GB, why not just get a 72 GB 10K RPM SCSI drive ($160) and an Ultra160 SCSI card? OS, swap, input files, etc. can go on a 7200 RPM IDE drive.

Far fewer moving parts than RAID, so more reliable...

--
Ron Johnson, Jr.    Home: ron.l.johnson@cox.net
Jefferson, LA USA   http://members.cox.net/ron.l.johnson

An ad currently being run by the NEA (the US's biggest public school
TEACHERS UNION) asks a teenager if he can find sodium and *chloride*
in the periodic table of the elements. And they wonder why people
think public schools suck...
On 3 May 2003, Ron Johnson wrote:

> If I may be a contrarian: if it's under 70 GB, why not just get a 72 GB
> 10K RPM SCSI drive ($160) and an Ultra160 SCSI card? OS, swap, input
> files, etc. can go on a 7200 RPM IDE drive.
>
> Far fewer moving parts than RAID, so more reliable...

Sorry, everything else is true, but RAID is far more reliable, even if an individual disk failure is more likely. Since a RAID array (1 or 5) can run with one dead disk and supports auto-rebuild from hot spares, there's really no way a single disk can be more reliable. It may have fewer failures, but that's not the same thing.
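A sketch of the hot-spare behavior with Linux software RAID (device names hypothetical; when a member fails, the kernel rebuilds onto the spare automatically):

    # mirror with one hot spare
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
          /dev/sda1 /dev/sdb1 /dev/sdc1
    # mark a member failed; recovery onto the spare starts on its own
    mdadm /dev/md0 --fail /dev/sdb1
    cat /proc/mdstat            # shows rebuild progress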
On Sat, 3 May 2003, Shridhar Daithankar wrote:

> So my advice is: setting swap to more than 128 MB is a waste of disk
> space. OK, 256 MB in ultra-extreme situations, but more than that would
> be inadvisable.

Whereas disks are all over 20 GB now; and whereas the Linux kernel will begin killing processes when it runs out of memory and swap; and whereas the Linux kernel still has issues using swap that's smaller than memory (those problems have been lessened, but not eliminated); and whereas the Linux kernel will parallelize access to its swap partitions when it has more than one at the same priority, providing better swap performance; and whereas real servers always use more memory than you'd ever thought they would: be it declared, here and now, by me, that using a small swap is penny-wise and pound-foolish. :-)

Seriously, though, having once had a REAL bad experience on a production server that I was trying to increase swap on (yes, some idiot set it up with a tiny little 64 MB swap file... yes, that idiot was me), I now just give every server a few gigs of swap from its three or four 40+ GB drives. With 4 drives, each donating 256 MB to the cause, you can have a gig of swap space.
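The parallel-swap trick, concretely: give each drive's swap partition the same priority in /etc/fstab and the kernel stripes pages across them (device names hypothetical):

    /dev/hda1   swap   swap   pri=1   0 0
    /dev/hdc1   swap   swap   pri=1   0 0
    /dev/hde1   swap   swap   pri=1   0 0
    /dev/hdg1   swap   swap   pri=1   0 0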
On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:

> Sorry, everything else is true, but RAID is far more reliable, even if an
> individual disk failure is more likely. Since a RAID array (1 or 5) can
> run with one dead disk and supports auto-rebuild from hot spares, there's
> really no way a single disk can be more reliable. It may have fewer
> failures, but that's not the same thing.

What controller do you use for IDE hot-swapping and auto-rebuild? 3Ware?

--
Ron Johnson, Jr.    Home: ron.l.johnson@cox.net
Jefferson, LA USA   http://members.cox.net/ron.l.johnson
On 5 May 2003, Ron Johnson wrote:

> What controller do you use for IDE hot-swapping and auto-rebuild?
> 3Ware?

Linux, and I don't do hot swapping with IDE, just hot rebuild from a spare drive. My servers are running SCSI, by the way; only the workstations are running IDE. With the saved cost of a decent RAID controller (good SCSI RAID controllers are still well over $500 most of the time), I can afford enough hot spares to never have to worry about changing one out during the day.
On Mon, 2003-05-05 at 17:22, scott.marlowe wrote:

> Linux, and I don't do hot swapping with IDE, just hot rebuild from a
> spare drive. My servers are running SCSI, by the way; only the
> workstations are running IDE.
[snip]

Ah, I guess drives go out infrequently enough that shutting the box down at night for a swap-out isn't all that onerous...

What controller model do you use?

--
Ron Johnson, Jr.    Home: ron.l.johnson@cox.net
Jefferson, LA USA   http://members.cox.net/ron.l.johnson
On 6 May 2003, Ron Johnson wrote:

> What controller model do you use?

My preference is Symbios (LSI now) plain Ultra160 SCSI, but at work we use the built-in Adaptec Ultra160 SCSI on Intel dual-CPU motherboards. I've used RAID controllers in the past, but now I genuinely prefer Linux's built-in kernel-level RAID to most controllers, and the load on the server is <2% of one of the two CPUs, so it doesn't really slow anything else down. The performance is quite good: I can read raw at about 48 MB/s from a pair of 10K RPM Ultra160 drives in a RAID-1, and each of those drives can pump out about 25 MB/s individually.
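A couple of quick, read-only ways to reproduce that kind of raw-read number on an array (device name hypothetical):

    hdparm -t /dev/md0                                   # buffered sequential read timing
    time dd if=/dev/md0 of=/dev/null bs=1M count=1024    # read 1 GB, divide by elapsed time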
On Tue, 2003-05-06 at 13:12, scott.marlowe wrote:

> My preference is Symbios (LSI now) plain Ultra160 SCSI, but at work we
> use the built-in Adaptec Ultra160 SCSI on Intel dual-CPU motherboards.
> I've used RAID controllers in the past, but now I genuinely prefer
> Linux's built-in kernel-level RAID to most controllers.

Hmm, I'm confused (again)... I thought you liked IDE RAID, because of the price savings.

--
Ron Johnson, Jr.    mailto:ron.l.johnson@cox.net
Jefferson, LA USA   http://members.cox.net/ron.l.johnson

The purpose of the military isn't to pay your college tuition or give
you a little extra income; it's to "kill people and break things".
Surprisingly, not everyone understands that.
On 6 May 2003, Ron Johnson wrote:

> Hmm, I'm confused (again)... I thought you liked IDE RAID, because of
> the price savings.

No, I was saying that software RAID is what I like, IDE or SCSI. I just use SCSI because it's on a server that happened to come with some nice UW SCSI drives. The discussion about IDE RAID was about what someone else was using; I was just defending the use of it, as it's still a great value for RAID arrays. And let's face it, the slowest IDE RAID you can build with new parts is probably still faster than the fastest SCSI RAID arrays from less than a decade ago. Now that Serial ATA is coming out, I expect a lot more servers to use it, and it looks like the drives made for Serial ATA will come in server-class versions (tested for longer life, greater heat resistance, etc.).

On my little 2x PPro 200 I have 6 2 GB UltraWide 80 MB/s SCSI drives and 2 80 GB DMA-33 drives, and the two 80 GB DMA-33 drives literally stomp the six 2 GB drives into the ground no matter how I configure them, except under heavy parallel access (e.g. pgbench -c 20 -t 1000), where the extra spindle/head count makes a big difference. And even then, the SCSIs are only a tiny bit faster, say 10% or so.
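For anyone wanting to repeat that kind of comparison, a minimal pgbench run (assuming a scratch database named bench; scale factor 100 gives a 10-million-row accounts table):

    createdb bench
    pgbench -i -s 100 bench       # initialize: accounts = scale * 100,000 rows
    pgbench -c 20 -t 1000 bench   # 20 concurrent clients, 1000 transactions each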