Thread: server hardware recommendations (the archives are dead)
I know this question has been asked before. I have seen it in the archives. Unfortunately the archives are dead right now (any search will yield "no results") and I need to make some decisions.

Can anyone give some general recommendations on hardware for a server running Linux (RH6 or 6.1) and PostgreSQL? I estimate that there will be about 2 GB of data stored in the database initially, but this could grow as high as 5 GB in the future.

Since the db is not multithreaded, I assume that buying a dual processor board and two processors would not be helpful to performance. Like any database server, a lot of RAM will be required for fast operation. I was thinking at a minimum 256 MB of RAM. I also want to have the database run on a RAID 5 array for speed and fault tolerance. Any suggestions here for disk type, RAID scheme (software or hardware), controller type, etc.? Any rule of thumb on the "extra" disk space needed above raw storage space for PostgreSQL operations (temporary tables, vacuum issues, etc.)?

Any past experiences, benchmarks, guesses, or hearsay gladly accepted. Thanks for your help.

- Adam

-------------------
Adam Rossi
President, PlatinumSolutions, Inc.
adam.rossi@platinumsolutions.com
http://www.platinumsolutions.com
P.O. Box 31, Oakton, VA 22124
PH: 703.352.8576  FAX: 703.352.8577
On Wed, 15 Dec 1999, Adam Rossi wrote:

> I know this question has been asked before. I have seen it in the archives.
> Unfortunately the archives are dead right now (any search will yield "no
> results") and I need to make some decisions.
>
> Can anyone give some general recommendations on hardware for a server
> running Linux (RH6 or 6.1) and PostgreSQL? I estimate that there will be
> about 2 GB of data stored in the database initially, but this could grow as
> high as 5 GB in the future.
>
> Since the db is not multithreaded, I assume that buying a dual processor
> board and two processors would not be helpful to performance. Like any

Actually, if you are going to have concurrent connections to the backend, PostgreSQL will in some circumstances make good use of multiple processors: process one runs on CPU0, process two runs on CPU1, etc.

> database server, a lot of RAM will be required for fast operation. I was
> thinking at a minimum 256 MB of RAM. I also want to have the database run
> on a RAID 5 array for speed and fault tolerance. Any suggestions here for
> disk type, RAID scheme (software or hardware), controller type, etc.?

My preference tends to be software RAID. Everything I've ever seen as far as hardware RAID is concerned has been quite a bit slower than software RAID, and this is with high-end servers.

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker wrote:

> On Wed, 15 Dec 1999, Adam Rossi wrote:
>
> > Since the db is not multithreaded, I assume that buying a dual processor
> > board and two processors would not be helpful to performance. Like any
>
> actually, if you are going to have concurrent connections to the backend,
> in some circumstances, PostgreSQL will handle multi-processors better than
> others...process one runs on CPU0, process two runs on CPU1, etc...

Just to expand on this a little: it really depends on how the OS handles threads. In the worst case, a multithreaded process might be restricted to running all its threads on a single CPU; I don't know of any OSes that do that, though. The best case for a multithreaded DB would be running threads on all processors simultaneously, which should be nearly equivalent to a multiprocess DB (where the OS assigns the processes to CPUs automatically), minus the per-process overhead. Then again, my understanding is that in Linux a thread is pretty much equivalent to a process anyway, so it would be a wash. To sum up: SMP and Postgres is a good idea.

> > database server, a lot of RAM will be required for fast operation. I was
> > thinking at a minimum 256 MB of RAM. I also want to have the database run
> > on a RAID 5 array for speed and fault tolerance. Any suggestions here for
> > disk type, RAID scheme (software or hardware), controller type, etc.?
>
> my preference tends to be software raid...whatever I've ever seen as far
> as hardware raid is concerned has been quite a bit slower than software
> raid...and this is with high-end servers...

I kind of question this, and here's why: I just set up a dual P3/256MB Linux box with 4 software RAID 5 volumes, and even loading data into one of the databases slows it to a crawl. I've been looking around because it seems absurd that the machine should slow down so much. I haven't really found any answers, but I have seen several places claim that software RAID under Linux _isn't_ safe for multiprocessors, and no place has told me for sure that it is. I am running kernel 2.2.12 and am not having any problems with it other than the slowdown, and that only happens when I'm loading data. Right now my guess is that the raid5 daemon runs at such a high priority that it sucks CPU cycles away from everything else to calculate parity information. Like I said, I haven't found a lot of information about this, so if someone could confirm or explain what's happening, I'd appreciate it. Assuming this is the case, software RAID wouldn't be a big problem if you don't do a lot of heavy writing.
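[Editor's note: the parity work the raid5 daemon does is just a byte-wise XOR across the data blocks of each stripe, recomputed on every write. A minimal sketch (block contents here are made up for illustration), including the rebuild-after-failure case that makes RAID 5 fault tolerant:]

```python
from functools import reduce

def raid5_parity(data_blocks):
    """XOR all data blocks together to produce the parity block.

    This per-stripe XOR is the CPU work a software raid5 thread must do
    on every write; a hardware controller does the same XOR on its own
    processor instead.
    """
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

def rebuild_block(surviving_blocks, parity):
    """Recover a lost block: XOR the parity with the surviving data blocks."""
    return raid5_parity(surviving_blocks + [parity])

# Three data disks plus one parity disk (a 4-disk RAID 5 stripe).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = raid5_parity([d0, d1, d2])

# Lose disk 1; rebuild its block from the survivors plus parity.
assert rebuild_block([d0, d2], p) == d1
```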
On Wed, 15 Dec 1999, Jeff Hoffmann wrote:

> > my preference tends to be software raid...whatever I've ever seen as far
> > as hardware raid is concerned has been quite a bit slower than software
> > raid...and this is with high-end servers...
>
> i kind of question this, and here's why: i just set up a linux dual
> P3/256MB with 4 software raid 5 volumes and even loading data into one
> of the databases slows it to a crawl. i've been looking around because
> it seems absurd that the machine should slow down so much. i haven't
> really found any answers, but i have seen several places which told me
> that software raid under linux _isn't_ safe for multiprocessors & no
> place has told me for sure that it is.

What filesystem? I know (thank god) very little about Linux, but there have been comments here by some Linux folks (Thomas, wasn't it you?) indicating that ext2fs is poor for this. Are you running with fsync() on or off?

> appreciate it. assuming this is the case, software raid wouldn't be a
> big problem if you don't do a lot of heavy writing.

Most of my RAID tests are on Solaris + DiskSuite. With good drives in the machine, my writes are something like 18 MB/s to the drive, striped and mirrored; I think reads worked out to 19 MB/s. (Bad drives, same setup, same machine, same OS, were netting me something like 3 MB/s... really killed performance *grin*)

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
> What filesystem? I know (thank god) very little about Linux, but
> there have been comments here by some Linux folks (Thomas, wasn't it
> you?) that indicated that ext2fs sucks for this? Are you running with
> fsync() on or off?

What is the problem with ext2fs? Is it just performance, or is there a serious chance of my losing data?

Joost Roeleveld
On Wed, 15 Dec 1999, J. Roeleveld wrote:

> > What filesystem? I know (thank god) very little about Linux, but
> > there have been comments here by some Linux folks (Thomas, wasn't it
> > you?) that indicated that ext2fs sucks for this? Are you running with
> > fsync() on or off?
>
> What is the problem with ext2fs? Is it just performance? or is there a
> serious chance for me losing data?

That, I do not know... this is just something I recall from previous discussions...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
At 08:41 AM 12/15/99 -0500, Adam Rossi wrote:

>I know this question has been asked before. I have seen it in the archives.
>Unfortunately the archives are dead right now (any search will yield "no
>results") and I need to make some decisions.
>
>Can anyone give some general recommendations on hardware for a server
>running Linux (RH6 or 6.1) and PostgreSQL? I estimate that there will be
>about 2 GB of data stored in the database initially, but this could grow as
>high as 5 GB in the future.
>
>Since the db is not multithreaded, I assume that buying a dual processor
>board and two processors would not be helpful to performance. Like any
>database server, a lot of RAM will be required for fast operation. I was
>thinking at a minimum 256 MB of RAM. I also want to have the database run
>on a RAID 5 array for speed and fault tolerance. Any suggestions here for
>disk type, RAID scheme (software or hardware), controller type, etc.? Any
>rule of thumb on the "extra" disk space needed above raw storage space for
>PostgreSQL operations (temporary tables, vacuum issues, etc.)?
>
>Any past experiences, benchmarks, guesses, or hearsay gladly accepted.
>Thanks for your help.

I think you may want to consider a software-based volume manager (e.g. Veritas) and run a striped mirror (RAID 10) rather than RAID 5. This will help out on writes. I'm not into RH, but surely they must offer something comparable to FreeBSD's Vinum. If not, I know Debian has one available.

Ciao--
Ken
http://www.y2know.org/safari

Failure is not an option. It comes bundled with your Microsoft product.
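[Editor's note: a rough sketch of why a striped mirror helps writes. A small random write on RAID 5 becomes a read-modify-write of both the data and the parity block, while RAID 10 just writes the block to the two mirror sides. Counting disk I/Os per small write (a simplification that ignores caching and full-stripe writes):]

```python
def write_ios(level, blocks=1):
    """Disk I/Os for a small (sub-stripe) random write, the common DB case.

    RAID 5:  read old data + read old parity, write new data + write new parity.
    RAID 10: one write to each side of the mirror.
    """
    if level == "raid5":
        return 4 * blocks  # read-modify-write of data and parity
    if level == "raid10":
        return 2 * blocks  # one write per mirror side
    raise ValueError(level)

assert write_ios("raid5") == 4
assert write_ios("raid10") == 2
```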
On Wed, Dec 15, 1999 at 11:27:36AM -0400, The Hermit Hacker wrote:

> On Wed, 15 Dec 1999, Jeff Hoffmann wrote:
>
> > > my preference tends to be software raid...whatever I've ever seen as far
> > > as hardware raid is concerned has been quite a bit slower than software
> > > raid...and this is with high-end servers...
> >
> > i kind of question this, and here's why: i just set up a linux dual
> > P3/256MB with 4 software raid 5 volumes and even loading data into one
> > of the databases slows it to a crawl. i've been looking around because
<snip>
> Most of my RAID tests are on Solaris+Disksuite...with good drives
> in the machine, my writes are something like 18MB/s to the drive, striped
> and mirrored...I think reads worked out to be 19MB/s...(bad drives, same

Ah, this would be a RAID 0+1 setup, then? Very different from Jeff's RAID 5 configuration. I'd be willing to believe that software RAID 0+1 _could_ be faster than most hardware (it's just shuffling and duplicating blocks across different drives, which can be done with clever pointer twiddling), but calculating parity bits in hardware for RAID 5 has got to be a win, doesn't it?

As it turns out, I'm speccing a similar machine right now myself, and I've been running into statements like yours re: software RAID that surprised me.

> setup, same machine, same OS, were netting me something like 3MB/s...really
> killed performance *grin*)

Hmm, bad drives as in broken, or slow?

Ross
--
Ross J. Reedstrom, Ph.D., <reedstrm@rice.edu>
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St., Houston, TX 77005
On Wed, 15 Dec 1999, Ross J. Reedstrom wrote:

> Ah, this would be a RAID 0+1 setup, then? Very different from Jeff's RAID
> 5 configuration. I'd be willing to believe that software RAID 0+1 _could_
> be faster than most hardware (it's just shuffling and duplicating blocks
> across different drives, which can be done with clever pointer
> twiddling), but calculating parity bits in hardware for RAID 5 has got
> to be a win, doesn't it?

Oops, overlooked the RAID 5 issue... sorry about that...

> > setup, same machine, same OS, were netting me something like 3MB/s...really
> > killed performance *grin*)
>
> Hmm, bad drives as in broken, or slow?

Slow... they were bought brand new a year ago... Fujitsus... even a single, non-striped drive was performing atrociously... replaced them with Seagates and the machine flies...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
"Ross J. Reedstrom" wrote:

> On Wed, Dec 15, 1999 at 11:27:36AM -0400, The Hermit Hacker wrote:
> > On Wed, 15 Dec 1999, Jeff Hoffmann wrote:
> >
> > Most of my RAID tests are on Solaris+Disksuite...with good drives
> > in the machine, my writes are something like 18MB/s to the drive, striped
> > and mirrored...I think reads worked out to be 19MB/s...(bad drives, same
>
> Ah, this would be a RAID 0+1 setup, then? Very different from Jeff's RAID
> 5 configuration. I'd be willing to believe that software RAID 0+1 _could_
> be faster than most hardware (it's just shuffling and duplicating blocks
> across different drives, which can be done with clever pointer
> twiddling), but calculating parity bits in hardware for RAID 5 has got
> to be a win, doesn't it?

I would assume that this is the case. For anybody who is going to spec a new machine for a database as small as 3-5 GB, RAID 0+1 has got to be the choice, and I don't doubt it would be reasonably fast with software RAID. These days it would be hard to buy new disks small enough to build a 0+1 array under 8 GB (4 x 4 GB drives give you 8 GB usable). When you're on a budget with a backup server that needs 20+ drives (n+1 for RAID 5) versus 40+ drives (2n for 0+1), though, RAID 5 is a good solution. Doing it again, I'd go with a hardware controller, since no one seems to be refuting my assumption that the raid5 daemon can suck up a lot of CPU calculating parity, even with two fairly fast processors.
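[Editor's note: the drive-count arithmetic Jeff describes, as a quick sketch. The drive sizes are his example figures; the 76 GB target is a hypothetical round number:]

```python
def usable_gb(level, drives, drive_gb):
    """Usable capacity for the two RAID schemes discussed in the thread."""
    if level == "raid5":
        # n+1: one drive's worth of space goes to (distributed) parity
        return (drives - 1) * drive_gb
    if level == "raid10":
        # 2n: half the drives mirror the other half
        return (drives // 2) * drive_gb
    raise ValueError(level)

# Jeff's example: 4 x 4 GB drives in RAID 0+1 give 8 GB usable.
assert usable_gb("raid10", 4, 4) == 8

# The budget trade-off: the same 76 GB usable takes 20 drives as RAID 5
# but 38 drives as RAID 0+1 (assuming hypothetical 4 GB drives).
assert usable_gb("raid5", 20, 4) == 76
assert usable_gb("raid10", 38, 4) == 76
```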
> > > database server, a lot of ram will be required for fast operation. I was
> > > thinking at a minimum 256 meg of ram. I also want to have the database run
> > > on a RAID 5 array for speed and fault tolerance. Any suggestions here for
> > > disk type, RAID scheme (software or hardware), controller type, etc.?

Obviously, memory needs vary according to what apps are running. On my Linux RH6.0 system, I'm finding that pgsql backends take ~4 MB each. It seems like I need something near ~100 MB for overhead (X server 20 MB, Netscape 40 MB, ten xterms at 4 MB each). For Apache with 30 child processes at 25 MB each (mod_perl with lots of cached data and lots of modules), that quickly adds up to a gig of RAM to avoid performance-killing swapping.

> > my preference tends to be software raid...whatever I've ever seen as far
> > as hardware raid is concerned has been quite a bit slower than software
> > raid...and this is with high-end servers...

Jeff Hoffmann wrote:

> ...i have seen several places which told me
> that software raid under linux _isn't_ safe for multiprocessors & no
> place has told me for sure that it is.

Would you mind clarifying your understanding of which versions of Linux are unsafe for software RAID, and how? Browsing Deja, RH 6.1 (2.2.12-20smp) reportedly handles software RAID without known problems (other than less-than-glowing documentation reviews), and I'm setting up pgsql/apache on such a system with software RAID (no clear showstopping problems yet).

Thanks,
Ed Loehr
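[Editor's note: Ed's figures tally up to roughly a gigabyte. A back-of-the-envelope sketch, where every per-process number is his estimate, not a measurement, and the backend count is a guess:]

```python
def ram_budget_mb(pg_backends, apache_children=30,
                  backend_mb=4, apache_child_mb=25, overhead_mb=100):
    """Total RAM needed to keep everything resident (i.e. no swapping).

    Defaults are Ed's estimates: ~4 MB per pgsql backend, ~25 MB per
    mod_perl Apache child, ~100 MB for X/Netscape/xterm overhead.
    """
    return overhead_mb + apache_children * apache_child_mb + pg_backends * backend_mb

# 30 Apache children plus a guessed 30 concurrent backends:
total = ram_budget_mb(pg_backends=30)
print(total, "MB")  # 970 MB -- which is how it "quickly adds up to a gig"
```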
> Since the db is not multithreaded, I assume that buying a dual processor
> board and two processors would not be helpful to performance. Like any

If your disk subsystem can take the load, it does make sense. For every connection to PostgreSQL, a separate backend is started, which is really a separate process and so can run on a different processor.

Maarten

--
Maarten Boekhold, maarten.boekhold@tibcofinance.com
TIBCO Finance Technology Inc.
"Sevilla" Building, Entrada 308
1096 ED Amsterdam, The Netherlands
tel: +31 20 6601000 (direct: +31 20 6601066)
fax: +31 20 6601005
http://www.tibcofinance.com
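[Editor's note: a toy illustration of the process-per-connection point, assuming nothing about PostgreSQL internals beyond what Maarten states. Each "connection" below is handled by a freshly spawned OS process with its own PID; since they are ordinary processes, the kernel is free to schedule concurrent ones on different CPUs. The helper name is made up for the sketch, and the connections here run sequentially just to keep it deterministic:]

```python
import os
import subprocess
import sys

def handle_connection(conn_id):
    """Stand-in for the postmaster forking a backend: serve each
    'connection' in its own OS process and report that process's PID."""
    out = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getpid())"],
        capture_output=True, text=True, check=True,
    )
    return conn_id, int(out.stdout)

results = [handle_connection(i) for i in range(3)]
pids = [pid for _, pid in results]

# Every connection got its own process, and none of them was the parent --
# which is exactly why a dual-CPU box helps a non-threaded server.
assert os.getpid() not in pids
assert len(set(pids)) == 3
```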
Ed Loehr wrote: > Jeff Hoffman wrote: > > > ...i have seen several places which told me > > that software raid under linux _isn't_ safe for multiprocessors & no > > place has told me for sure that it is. > > Would you mind clarifying your understanding on which versions of linux are unsafe > for software raid and how? Browsing deja, RH 6.1 (2.2.12-20smp) reportedly handles > software raid without known problems (other than less than glowing documentation > reviews), and I'm setting up pgsql/apache on such a system with software raid (no > clear showstopping problems yet). i'm not running RH6.1, but i am using a 2.2.12 kernel (compiled myself) and like i said, no catastrophic problems. the problems, i believe, came up with a conversation about an earlier 2.2 version of the kernel and it may have included problems with adaptec SCSI controllers in conjunction with software raid on SMP systems. i don't remember where i found the references, though. i believe the adaptec 7880 driver was fixed by 2.2.10, and there were a bunch of SMP fixes between 2.2.6 - 2.2.10, so 2.2.12 is probably as safe as anything. you never hear about things that work, just about things that don't.
Jeff Hoffmann wrote:

> "Ross J. Reedstrom" wrote:
> >
> > Ah, this would be a RAID 0+1 setup, then? Very different from Jeff's RAID
> > 5 configuration. I'd be willing to believe that software RAID 0+1 _could_
> > be faster than most hardware (it's just shuffling and duplicating blocks
> > across different drives, which can be done with clever pointer
> > twiddling), but calculating parity bits in hardware for RAID 5 has got
> > to be a win, doesn't it?
>
> i would assume that this would be the case. for anybody who is going to
> spec a new machine for a database as small as 3-5G, RAID 0+1 has got to
> be the choice. i don't have a doubt that it'd be reasonably fast with
> software raid. when you're on a budget with a backup server that needs
> 20+ drives (n+1 for raid5) vs. 40+ drives (2n for 0+1), though, raid 5
> is a good solution.

Of course, that's the real trick, isn't it? Hard drives are becoming so large, so fast, that it's difficult to determine the proper RAID solution within the supplied budget. We wanted speed, not volume, so we wanted to build a software RAID 0+1 configuration as cheaply as possible with the fastest disks/controllers.

We went with a multi-channel Ultra-2 Fast Wide Differential controller (80 MB/s) and 80 MB/s LVD Cheetah drives. The problem in building RAID 0+1 is that none of the drives come smaller these days than 9 GB ($450 US), so a minimal RAID 0+1 configuration would be 4 drives = 18 GB (2 for the stripe, 2 mirroring the stripe). That seems like major overkill for a database that NEEDS SPEED but may only grow to a couple of gig in size... It's too bad we couldn't buy 8 or 16 4 GB/2 GB 80 MB/s drives at proportional prices.

Also, for what it's worth, we've been running PostgreSQL on a dual 450 MHz SMP box with just RAID 1 for about a year now without problems under RedHat 5.2 (2.0.36), although in those earlier kernel versions you have to rebuild the kernel with _SMP_ defined. It's pretty quick, though...

Mike Mascari
Jeff Hoffmann wrote:

> Ed Loehr wrote:
> > Jeff Hoffmann wrote:
> >
> > > ...i have seen several places which told me
> > > that software raid under linux _isn't_ safe for multiprocessors & no
> > > place has told me for sure that it is.
> >
> > Would you mind clarifying your understanding of which versions of linux
> > are unsafe for software raid and how?
>
> i'm not running RH6.1, but i am using a 2.2.12 kernel (compiled myself)
> and like i said, no catastrophic problems. the problems, i believe,
> came up with a conversation about an earlier 2.2 version of the kernel
> and it may have included problems with adaptec SCSI controllers in
> conjunction with software raid on SMP systems.

I can confirm the problems with the Adaptec controller on kernels such as 2.2.5 (RedHat 6.0), regardless of whether or not you're running SMP. We lost data on a non-SMP box using the Adaptec 2940U2W LVD controller with 2.2.5. Our first move was to shut off RAID 1. I sent Doug Ledford a note on the issue (as I'm sure thousands have...), and have since upgraded to 2.2.9 and have run without problems... so far. The Adaptec controllers appear to be the ones to avoid.

Mike Mascari
> On Wed, 15 Dec 1999, Adam Rossi wrote:
>
> > Since the db is not multithreaded, I assume that buying a dual processor
> > board and two processors would not be helpful to performance. Like any
>
> actually, if you are going to have concurrent connections to the backend,
> in some circumstances, PostgreSQL will handle multi-processors better than
> others...process one runs on CPU0, process two runs on CPU1, etc...

I have removed "multi-threaded" from our comparison web page. It confused too many people.

--
Bruce Momjian                        |  http://www.op.net/~candle
maillist@candle.pha.pa.us            |  (610) 853-3000
+  If your life is a hard drive,     |  830 Blythe Avenue
+  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
> > What filesystem? I know (thank god) very little about Linux, but
> > there have been comments here by some Linux folks (Thomas, wasn't it
> > you?) that indicated that ext2fs sucks for this? Are you running with
> > fsync() on or off?
>
> What is the problem with ext2fs? Is it just performance? or is there a
> serious chance for me losing data?

The only comment made was something I said about raw devices on Linux. Someone said there is a raw device option for Linux and asked whether we wanted to try using it. I said most modern filesystems can move data at the speed of the disk, so raw devices really don't buy much. I mentioned that ext2 is not a modern filesystem; the *BSD filesystems are an example of a modern one. This may be what Marc is remembering. Unfortunately, it does not relate to the user's question.

(Raw devices do have advantages because of read-ahead control and disk-flush control. However, we seem to be doing fine without these marginal improvements, and raw devices bring a host of complex problems when implemented.)

--
Bruce Momjian                        |  http://www.op.net/~candle
maillist@candle.pha.pa.us            |  (610) 853-3000
+  If your life is a hard drive,     |  830 Blythe Avenue
+  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
On Wed, 15 Dec 1999, Bruce Momjian wrote:

> > What is the problem with ext2fs? Is it just performance? or is there a
> > serious chance for me losing data?
>
> The only comment made was something I said about raw devices on Linux.

Didn't Thomas make some comment, at one point, about not using PostgreSQL over ext2fs? I know he uses Linux; I just figured he was using a different filesystem than ext2fs...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
----- Original Message -----
From: The Hermit Hacker <scrappy@hub.org>
To: Bruce Momjian <pgman@candle.pha.pa.us>
Cc: J. Roeleveld <j.roeleveld@softhome.net>; pgsql-list <pgsql-general@hub.org>
Sent: Thursday, 16 December 1999 15:10
Subject: Re: [GENERAL] server hardware recommendations (the archives are dead)

> On Wed, 15 Dec 1999, Bruce Momjian wrote:
>
> > > What is the problem with ext2fs? Is it just performance? or is there a
> > > serious chance for me losing data?
> >
> > The only comment made was something I said about raw devices on Linux.
>
> Didn't Thomas make some comment, at one point, about not using PostgreSQL
> over ext2fs...I know he does use Linux, I just figured he was using a
> different fs than ext2fs...

OK, to stick my oar in: ext2 is very fast and efficient, but it does not have journalling and is not extent-based, therefore it is not "modern". Irrelevant. The problem with PostgreSQL on Linux relates to the fsync() system call. On 2.2 and earlier kernels this is surprisingly inefficient, which comes from the structuring of the buffer-cache and VFS layers in Linux. It was rectified in 2.3.10 and later, with potential order-of-magnitude increases in performance, particularly on SMP systems.

Raw devices have been a contentious issue in Linux. Linus's position has been that the OS I/O should be so good that no application-layer I/O system can hope to surpass it; any situation otherwise is a flaw in Linux. It seems that this position is justified. Generally, though, switching off fsync() when starting the postmaster solves most performance problems. If you wish to use raw I/O to get beyond the 32-bit file limit of Intel Linux... well, maybe you should be using 64-bit Linux: SPARC, Alpha, MIPS.

If you want a radical filesystem, check out ReiserFS. I'm sure we would all be interested to hear the results.

regards
John
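[Editor's note: the fsync() cost John describes is easy to see in a self-contained sketch. Record count and sizes are arbitrary; on a real disk the per-record-fsync run is typically far slower, since every fsync() waits for the platter rather than the OS cache. Both runs end up with identical file contents; only durability and latency differ:]

```python
import os
import tempfile
import time

def write_records(path, n, fsync_each):
    """Append n small records, optionally forcing each one to disk --
    roughly the extra work a database does per write with fsync enabled."""
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 128)
            if fsync_each:
                f.flush()
                os.fsync(f.fileno())  # push the data out of the OS cache
    return os.path.getsize(path)

with tempfile.TemporaryDirectory() as d:
    sizes = {}
    for fsync_each in (False, True):
        path = os.path.join(d, "data-%s" % fsync_each)
        t0 = time.perf_counter()
        sizes[fsync_each] = write_records(path, 200, fsync_each)
        print("fsync per record: %s, %.1f ms"
              % (fsync_each, (time.perf_counter() - t0) * 1e3))

# Same data either way; fsync only changes when it is guaranteed on disk.
assert sizes[False] == sizes[True] == 200 * 128
```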
Greetings!

I'd appreciate hearing more about the difference between the "good" and the "bad" drives, so that I can avoid the latter.

Thanks,
Courtney

------------------------------------------------------------------------
The Hermit Hacker wrote:

> On Wed, 15 Dec 1999, Jeff Hoffmann wrote:
>
> > i kind of question this, and here's why: i just set up a linux dual
> > P3/256MB with 4 software raid 5 volumes and even loading data into one
> > of the databases slows it to a crawl.
>
> What filesystem? I know (thank god) very little about Linux, but
> there have been comments here by some Linux folks (Thomas, wasn't it
> you?) that indicated that ext2fs sucks for this? Are you running with
> fsync() on or off?
>
> Most of my RAID tests are on Solaris+Disksuite...with good drives
> in the machine, my writes are something like 18MB/s to the drive, striped
> and mirrored...I think reads worked out to be 19MB/s...(bad drives, same
> setup, same machine, same OS, were netting me something like 3MB/s...really
> killed performance *grin*)
>
> Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
> Systems Administrator @ hub.org
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org