Thread: Dell PowerEdge 2950 performance
Hello,
I’ve recently been tasked with scalability/performance testing of a Dell PowerEdge 2950. This is the one with the new Intel Woodcrest Xeons. Since I haven’t seen any info on this box posted to the list, I figured people might be interested in the results, and maybe in return share a few tips on performance tweaks.
After doing some reading on the performance list, I realize that there’s a preference for Opteron; however, the goal of these experiments is to see what I can get the 2950 to do. I will also be comparing performance vs. a 1850 at some point, if there’s any interest I can post those numbers too.
Here’s the hardware:
2x Dual-Core 3.0 GHz CPU (Xeon 5160, 1333 MHz FSB, 4 MB shared cache per socket)
8 GB RAM (DDR2, fully buffered, dual ranked, 667 MHz)
6x 300 GB 10k RPM SAS drives
PERC 5/i w/ 256 MB battery-backed cache
The target application:
Mostly OLAP (large bulk loads, then lots of reporting, possibly moving to real-time loads in the future). All of this will be run on FreeBSD 6.1 amd64. (If I have some extra time, I might be able to run a few tests on linux just for comparison’s sake)
Test strategy:
Make sure the RAID is giving reasonable performance:
bonnie++ -d /u/bonnie -s 1000:8k
time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"
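For reference, here's a sketch of how the dd timing turns into a MB/s figure. The summary line format is what FreeBSD's dd prints; the sample numbers are illustrative (taken from a later post in this thread), not a new measurement:

```shell
# FreeBSD's dd prints a summary like the line below; this pulls out the
# bytes/sec figure and converts it to MB/s. Sample line is illustrative.
line="1024000000 bytes transferred in 7.070130 secs (144834680 bytes/sec)"
bps=$(echo "$line" | sed 's/.*(\([0-9][0-9]*\) bytes\/sec).*/\1/')
echo "$((bps / 1000000)) MB/s"
# → 144 MB/s
```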
Now, I realize that the above are overly simple and not indicative of overall performance; however, here's what I'm seeing:
Single 10k RPM 300 GB drive - ~75 MB/s on both tests, more or less
RAID 10, 6 disks (3 sets of mirrored pairs) - ~117 MB/s
The RAID 10 numbers look way off to me, so my next step is to go test some different RAID configs. I’m going to look at a mirrored pair, and a striped pair first, just to make sure the setup is sane. Then, RAID 5 x 6 disks, and mirrored pair + raid 10 with 4. Possibly software raid, however I’m not very familiar with this on FreeBSD.
Once I get the RAID giving me reasonable results (I would think that a raid 10 with 6 10k drives should be able to push >200 MB/s sustained IO…no?) I will move on to other more DB specific tests.
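A quick back-of-envelope for that expectation, assuming each drive sustains the ~75 MB/s measured on the single-drive test above and that the 6-disk RAID 10 stripes across 3 mirrored pairs:

```shell
# Rough sequential-throughput estimate for the 6-disk RAID 10.
# Assumption: each spindle sustains ~75 MB/s (single-drive dd test);
# sequential writes hit 3 independent mirrored pairs in parallel.
single_drive=75   # MB/s
pairs=3           # 6 disks = 3 mirrored pairs
echo "expected sequential write: ~$((single_drive * pairs)) MB/s"
# (reads could in principle use all 6 spindles if the controller
# load-balances mirror reads)
```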
A few questions:
1) Does anyone have other suggestions for testing raw IO for the RAID?
2) What is reasonable I/O (bonnie++, dd) for a 4- or 6-disk RAID 10?
3) For DB tests, I would like to compare performance on the different RAID configs and vs. the 1850, and maybe also use the results to assist in some basic postgresql.conf and OS tuning (though that will mostly wait until I start application-level testing). I realize that benchmarks don't necessarily map to application performance, but they help me establish a baseline for the hardware. I'm currently running pgbench, but would like something with a few more features (hopefully without too much setup time). I've heard mention of the OSDL's DBT tests, and I'm specifically interested in DBT-2 and DBT-3. Any suggestions here?
Here are some initial numbers from pgbench (-s 50 -c 10 -t 100). Please keep in mind that these are default installs of FreeBSD 6.1 and Postgres 8.1.4 - NO tuning yet.
1850: run1: 121 tps, run2: 132 tps, run3: 229 tps
2950: run1: 178 tps, run2: 201 tps, run3: 259 tps
Obviously neither PG nor FreeBSD is taking full advantage of the hardware available in either case.
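Since both installs are stock, most of the gap to the hardware is just defaults. A first pass at postgresql.conf for an 8 GB box might look like the fragment below - these values are illustrative guesses for an 8.1-era config (which takes shared_buffers and effective_cache_size in 8 kB pages), not settings tested in this thread:

```
# Illustrative first-pass settings for PG 8.1 on an 8 GB box - guesses,
# not values benchmarked here. 8.1 counts these in 8 kB pages.
shared_buffers = 50000           # ~400 MB, vs. the tiny default
effective_cache_size = 700000    # ~5.5 GB; most of RAM ends up as OS cache
checkpoint_segments = 32         # fewer checkpoints during bulk loads
wal_buffers = 64
```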
I will post the additional RAID numbers shortly…
Thanks,
Bucky
... Is the PERC 5/i dual channel? If so, are half the drives on one channel and the other half on the other channel? I find this helps RAID 10 performance when the mirrored pairs are on separate channels. ...

With the SAS controller (PERC 5/i), every drive gets its own 3 Gb/s port.

... Your transfer rate seems pretty good for Dell hardware, but I'm not experienced enough with SAS drives to know if those numbers are good in an absolute sense. Also, which driver picked up the SAS controller? amr(4) or aac(4) or some other? That makes a big difference too. I think the amr driver is "better" than the aac driver. ...

The internals of the current SAS drives are similar to the U320s they replaced in terms of read/write/seek performance; however, the benefit is the SAS bus, which helps eliminate some of the U320 limitations (e.g. with the PERC 4, you only get 160 MB/s per channel, as you mentioned). It's using the mfi driver.

Here's some simplistic performance numbers:

time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"

Raid0 x 2 (2 spindles): ~138 MB/s on BSD
Raid5 x 4:              ~160 MB/s BSD, ~274 MB/s Knoppix (ext2)
Raid5 x 6:              ~255 MB/s BSD,  265 MB/s Knoppix (ext3)
Raid10 x 4:              ~25 MB/s BSD
Raid50 x 6:             ~144 MB/s BSD,  271 MB/s Knoppix

* BSD is 6.1-RELEASE amd64 with UFS + soft updates; Knoppix is 5.1 (ext2 didn't like the >1 TB partition for the 6-disk RAID 5, hence ext3)

Seems to me the PERC 5 has issues with layered RAID (10, 50), which, as others have suggested on this list, is a common problem with lower-end RAID cards. For now, I'm going with the RAID 5 option; however, if I have time, I would like to test having the hardware do RAID 0 and doing RAID 1 in the OS, or vice versa, as proposed in other posts.

Also, I ran a pgbench -s 50 -c 10 -t 1000 on a completely default BSD 6.1 and PG 8.1.4 install with RAID 5 x 6 disks, and got 442 tps on a fresh run. (The numbers climb very rapidly due to caching after running simultaneous tests without reinitializing the test db; I'm guessing this is due to OS caching, since the default postgresql.conf is pretty limited in terms of resource use.) I probably need to up the scaling factor significantly so the whole data set doesn't get cached in RAM if I want realistic results from simultaneous tests, but it seems quicker to just reinit each time at this point. On to some kernel tweaks and some adjustments to postgresql.conf...

- Bucky
On Aug 14, 2006, at 3:56 PM, Bucky Jordan wrote: > Seems to me the PERC5 has issues with layered raid (10, 50) as > others have suggested on this list is a common problem with lower > end raid cards. For now, I'm going with the RAID 5 option, however > if I have time, I would like to test having the hardware do raid 0 > and doing raid 1 in the os, or vice versa, as proposed in other posts. Wow, those are pretty awesome numbers.... I'm actually inclined to try these as my DB servers again! Lately I've been using Sun X4100 with Adaptec RAID cards, but they don't transfer nearly as fast as that on simple tests. Of more interest would be a test which involved large files with lots of seeks all around (something like bonnie++ should do that). I too have noticed that Dell controllers don't like doing layered RAID levels very well. All of mine are doing plain old RAID5 or RAID1 only, and at that they are acceptable. The PERC 4/Si in the 1850 has been pretty fast at RAID1. Thanks for sharing your numbers.
... Of more interest would be a test which involved large files with lots of seeks all around (something like bonnie++ should do that). ...

Here's the bonnie++ numbers for the RAID 5 x 6 disks. I believe this was with write-through and 64k striping. I plan to run a few others with different block sizes and larger files - I'd be happy to send out a link to the list when I get a chance to post them somewhere. I've also been running some basic tests with pgbench just to help jumpstart customizing postgresql.conf, so that might be of interest too.

bash-2.05b$ bonnie++ -d bonnie -s 1000:8k
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
              1000M   587  99 246900  71 225124  76  1000  99 585723  99  8573 955
Latency             14367us   50829us     410ms   57965us    1656us     432ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 28192  91 +++++ +++ +++++ +++ 26076  89 +++++ +++ +++++ +++
Latency             25988us      75us      37us   24756us      36us      41us
1.93c,1.93c,,1,1155223901,1000M,,587,99,246900,71,225124,76,1000,99,585723,99,8573,955,16,,,,,28192,91,+++++,+++,+++++,+++,26076,89,+++++,+++,+++++,+++,14367us,50829us,410ms,57965us,1656us,432ms,25988us,75us,37us,24756us,36us,41us

... Thanks for sharing your numbers. ...

You're welcome - I prefer to see actual numbers rather than people simply stating that RAID controller X is better, so hopefully more people will do the same.

- Bucky
Bucky,

I see you are running bonnie++ version 1.93c. The numbers it reports are very different from version 1.03a, which is the one everyone runs - can you post your 1.03a numbers from bonnie++?

- Luke

On 8/14/06 4:38 PM, "Bucky Jordan" <bjordan@lumeta.com> wrote:

> Here's the bonnie++ numbers for the RAID 5 x 6 disks. I believe this was
> with write-through and 64k striping. I plan to run a few others with
> different block sizes and larger files.
> [...]
... I see you are running bonnie++ version 1.93c. The numbers it reports are very different from version 1.03a, which is the one everyone runs - can you post your 1.03a numbers from bonnie++? ...

Luke,

Thanks for the pointer. Here's the 1.03 numbers, but at the moment I'm only able to run them on the 6-disk RAID 5 setup (128k stripe, writeback enabled since the PERC 5 does have a battery-backed cache).

bonnie++ -d bonnie -s 1000:8k
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
              1000M 155274  95 265359  44 232958  52 166884  99 1054455  99 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 30550  88 +++++ +++ +++++ +++
,1000M,155274,95,265359,44,232958,52,166884,99,1054455,99,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,30550,88,+++++,+++,+++++,+++

- Bucky
Bucky,

I don't know why I missed this the first time - you need to let bonnie++ pick the file size; it needs to be 2x memory or the results you get will not be accurate. In this case you've got a 1 GB file, which nicely fits in RAM.

- Luke

On 8/15/06 6:56 AM, "Bucky Jordan" <bjordan@lumeta.com> wrote:

> Thanks for the pointer. Here's the 1.03 numbers, but at the moment I'm
> only able to run them on the 6 disk RAID 5 setup (128k stripe, writeback
> enabled since the Perc5 does have a battery backed cache).
> [...]
On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote: > I don't know why I missed this the first time - you need to let > bonnie++ > pick the file size - it needs to be 2x memory or the results you > get will > not be accurate. which is an issue with freebsd and bonnie++ since it doesn't know that freebsd can use large files natively (ie, no large file hacks necessary). the freebsd port of bonnie takes care of this, if you use that instead of compiling your own.
Luke,

For some reason it looks like bonnie is picking a 300M file.

> bonnie++ -d bonnie
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
               300M 179028  99 265358  41 270175  57 167989  99 +++++ +++ +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
,300M,179028,99,265358,41,270175,57,167989,99,+++++,+++,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

So here's results when I force it to use a 16GB file, which is twice the amount of physical RAM in the system:

> bonnie++ -d bonnie -s 16000:8k
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
             16000M 158539  99 244430  50  58647  29  83252  61 144240  21 789.8   7
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  7203  54 +++++ +++ +++++ +++ 24555  42 +++++ +++ +++++ +++
,16000M,158539,99,244430,50,58647,29,83252,61,144240,21,789.8,7,16,7203,54,+++++,+++,+++++,+++,24555,42,+++++,+++,+++++,+++

... from Vivek...
which is an issue with freebsd and bonnie++ since it doesn't know that freebsd can use large files natively (ie, no large file hacks necessary). the freebsd port of bonnie takes care of this, if you use that instead of compiling your own.
...

Unfortunately I had to download and build by hand, since only bonnie++ 1.9x was available in the BSD 6.1 ports when I checked.

One other question - would the following also be mostly a test of RAM? I wouldn't think so, since it should force a sync to disk...

time bash -c "(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k && sync)"

Oh, and while I'm thinking about it - I believe Postgres uses 8k data pages, correct? On the RAID, I'm using 128k stripes. I know there have been posts on this before, but is there any way to tell Postgres to use this in an effective way?

Thanks,

Bucky

-----Original Message-----
From: pgsql-performance-owner@postgresql.org On Behalf Of Vivek Khera
Sent: Tuesday, August 15, 2006 3:18 PM
To: Pgsql-Performance ((E-mail))
Subject: Re: [PERFORM] Dell PowerEdge 2950 performance
[...]
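The page/stripe arithmetic behind that question works out as follows (a sketch; the 5-data-disk figure assumes the RAID 5 over 6 drives described earlier):

```shell
# How Postgres' 8 kB pages map onto the RAID geometry.
# Assumption: 6-disk RAID 5, so 5 data disks per stripe.
page_kb=8
stripe_kb=128       # per-disk stripe unit
data_disks=5
echo "$((stripe_kb / page_kb)) pages per stripe unit"
echo "$((stripe_kb * data_disks)) kB per full stripe"
# A sequential scan reads 16 consecutive pages before moving to the
# next spindle, which is why large stripe units favor sequential I/O.
```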
On Aug 15, 2006, at 4:21 PM, Bucky Jordan wrote: > ... from Vivek... > which is an issue with freebsd and bonnie++ since it doesn't know > that freebsd can use large files natively (ie, no large file hacks > necessary). the freebsd port of bonnie takes care of this, if you > use that instead of compiling your own. > ... > > Unfortunately I had to download and build by hand, since only bonnie++ > 1.9x is available in BSD 6.1 ports when I checked. see the patch file in the bonnie++ port file and apply something similar. basically you take out the check for large file support and force it on.
Cool - seems like the posters caught that "auto memory pick" problem before you posted, but you got the 16GB/8k parts right.

Now we're looking at realistic numbers - 790 seeks/second, 244 MB/s sequential write, but only 144 MB/s sequential read, perhaps 60% of what it should be. Seems like a pretty good performer in general - if it were Linux I'd play with the max readahead in the I/O scheduler to improve the sequential reads.

- Luke

On 8/15/06 1:21 PM, "Bucky Jordan" <bjordan@lumeta.com> wrote:

> For some reason it looks like bonnie is picking a 300M file.
> [...]
> So here's results when I force it to use a 16GB file, which is twice the
> amount of physical ram in the system:
> [...]
Luke,

Thanks for the tips. I'm running FreeBSD 6.1 amd64, but I can also enable readahead on the RAID controller, and also adaptive readahead. Here's the tests:

Readahead & writeback enabled:

bash-2.05b$ bonnie++ -d bonnie -s 16000:8k
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
             16000M 156512  98 247520  47  59560  27  83138  60 143588  21 792.8   7
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 27789  99 +++++ +++ +++++ +++ +++++ +++
,16000M,156512,98,247520,47,59560,27,83138,60,143588,21,792.8,7,16,+++++,+++,+++++,+++,27789,99,+++++,+++,+++++,+++,+++++,+++

Writeback and adaptive readahead:

bash-2.05b$ bonnie++ -d bonnie -s 16000:8k
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
             16000M 155542  97 246910  47  60356  26  82798  60 143321  21 787.3   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6329  49 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
,16000M,155542,97,246910,47,60356,26,82798,60,143321,21,787.3,6,16,6329,49,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

(As a side note - according to the controller docs, adaptive readahead reads ahead sequentially only if there are two reads from sequential sectors; otherwise it doesn't.)

So, I'm thinking that the RAID controller doesn't really help with this too much - I'd think the OS could do a better job deciding when to read ahead. I've set it back to no readahead, and the next step is to look at OS-level file system tuning.

Also, if I have time, I'll try doing RAID 1 on the controller and RAID 0 in the OS (or vice versa). Since I have 6 disks, I could do a stripe of 3 mirrored pairs (RAID 10) or a mirror of two striped sets of 3 (0+1). Theoretically speaking, they should have the same performance characteristics, but I doubt they will in practice.

Thanks,

Bucky

-----Original Message-----
From: Luke Lonergan [mailto:llonergan@greenplum.com]
Sent: Wednesday, August 16, 2006 2:18 AM
To: Bucky Jordan; Vivek Khera; Pgsql-Performance ((E-mail))
Subject: Re: [PERFORM] Dell PowerEdge 2950 performance

Cool - seems like the posters caught that "auto memory pick" problem before you posted, but you got the 16GB/8k parts right.
[...]
Bucky Jordan wrote:
> Here's some simplistic performance numbers:
> time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"
>
> Raid0 x 2 (2 spindles) ~138 MB/s on BSD

PE2950, FreeBSD 6.1 i386, raid0 (2 spindles):

time csh -c "(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k && sync)"
125000+0 records in
125000+0 records out
1024000000 bytes transferred in 7.070130 secs (144834680 bytes/sec)
0.070u 2.677s 0:07.11 38.5%     23+224k 31+7862io 0pf+0w

mfi0: <Dell PERC 5/i>

I recompiled the kernel to get the latest mfi driver. Also, the "bce" NIC driver is buggy in the 6.1 kernel from the CD distro - make sure you have the latest drivers for BSD 6.1.

bonnie++
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid0        16000M   262  99 116527  38  26451  12   495  99 135301  46 323.5  15
Latency             32978us     323ms     242ms   23842us     171ms    1370ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
raid0               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5837  19 +++++ +++ +++++ +++  3463  11 +++++ +++ +++++ +++
Latency               555ms     422us      43us    1023ms      52us      60us
1.93c,1.93c,raid0,1,1155819725,16000M,,262,99,116527,38,26451,12,495,99,135301,46,323.5,15,16,,,,,5837,19,+++++,+++,+++++,+++,3463,11,+++++,+++,+++++,+++,32978us,323ms,242ms,23842us,171ms,1370ms,555ms,422us,43us,1023ms,52us,60us

--
Best Regards,

alvis