Thread: Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig
Mikael,

> -----Original Message-----
> From: Mikael Carneholm [mailto:Mikael.Carneholm@WirelessCar.com]
> Sent: Friday, July 28, 2006 2:05 AM
>
> My bonnie++ results are found in this message:
> http://archives.postgresql.org/pgsql-performance/2006-07/msg00164.php

Apologies if I've already said this, but those bonnie++ results are very disappointing. The sequential transfer rates of 20MB/s to 57MB/s are slower than a single SATA disk, and each of your SCSI disks should be capable of roughly 80MB/s of sequential transfer on its own. Random access is also very poor at about 500 seeks/second, though that is perhaps in line with 5 disk drives.

By comparison, we routinely get 950MB/s of sequential transfer using 16 SATA disks and 3Ware 9550SX SATA RAID adapters on Linux. On Solaris ZFS on an X4500, we recently got this bonnie++ result on 36 SATA disk drives in RAID10 (single-threaded run first):

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1   32G 120453  99 467814  98 290391  58 109371  99 993344  94  1801   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 30850  99 +++++ +++ +++++ +++

Bumping the number of concurrent processes up to 2, we get about 1.5x the single-threaded read speed out of RAID10 under a concurrent workload (add the rates of the two runs together):

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1   32G 111441  95 212536  54 171798  51 106184  98 719472  88  1233   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 26085  90 +++++ +++  5700  98 21448  97 +++++ +++  4381  97

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1   32G 116355  99 212509  54 171647  50 106112  98 715030  87  1274   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 26082  99 +++++ +++  5588  98 21399  88 +++++ +++  4272  97

Combined, that's about 2,500 seeks per second, 1,440MB/s of sequential block reads, and 212MB/s of per-character sequential reads.

- Luke
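For reference, a rough sketch of how a single run and a 2-process run like the ones above can be driven; the scratch directories and user name are hypothetical, since the exact invocations weren't given:

    # single run, 32GB working set (-s takes megabytes)
    bonnie++ -d /mnt/test -s 32768 -u nobody

    # two concurrent instances; per the note above, add the rates
    # from the two result lines together
    bonnie++ -d /mnt/test/a -s 32768 -u nobody &
    bonnie++ -d /mnt/test/b -s 32768 -u nobody &
    wait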
I too have a DL385, with a single DC Opteron 270. It claims to have a Smart Array 6i controller, and over the last couple of days I've been running some tests on it which have been yielding some surprising results.

I've got 6 10K U320 disks in it. 2 are in a mirror set; we'll not pay any attention to them. The remaining 4 disks I've been toying with to see what config works best, using hardware RAID and software RAID.

system info:
DL385 - 1 Opteron 270 - 5GB RAM - Smart Array 6i
cciss0: HP Smart Array 6i Controller
Firmware Version: 2.58
Linux db03 2.6.17-1.2157_FC5 #1 SMP Tue Jul 11 22:53:56 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux
using xfs

Each drive can sustain 80MB/sec read (dd, straight off the device).

So here are the results I have so far (averaged):

hardware RAID 5:
dd - write 20GB file - 48MB/sec
dd - read 20GB file - 247MB/sec
[ didn't do a bonnie run on this yet ]
Pretty terrible write performance; good read.

hardware RAID 10:
dd - write 20GB - 104MB/sec
dd - read 20GB - 196MB/sec
bonnie++:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db03          9592M 45830  97 129501  31  62981  14 48524  99 185818  19 949.0   1

software RAID 5:
dd - write 20GB - 85MB/sec
dd - read 20GB - 135MB/sec

I was very surprised at those results; I was sort of expecting it to smoke the hardware. I repeated the test many times and kept getting these numbers.

bonnie++:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db03          9592M 44110  97  81481  23  34604  10 44495  95 157063  28 919.3   1

software RAID 10:
dd - write 20GB - 108MB/sec
dd - read 20GB - 86MB/sec (!!!! WTF? - this is repeatable!!)
bonnie++:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db03          9592M 44539  98 105444  20  34127   8 39830  83 100374  10  1072   1

So I'm going to be going with HW RAID 5, which went against what I thought going in - read performance is more important for my usage than write.

I'm still not sure about that software RAID 10 read number. Something is not right there...

--
Jeff Trout <jeff@jefftrout.com>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/
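For the curious, a rough sketch of the kind of commands behind tests like these; the device names, mount point, and array layout are hypothetical, since the exact invocations weren't included above (on this box the disks actually appear as cciss logical drives, e.g. /dev/cciss/c0d2, rather than /dev/sdX):

    # raw sequential read straight off one drive
    dd if=/dev/sdb of=/dev/null bs=1M count=4096

    # sequential write/read of a 20GB file on the mounted filesystem
    dd if=/dev/zero of=/data/testfile bs=1M count=20480
    sync
    echo 1 > /proc/sys/vm/drop_caches   # flush the page cache before the read pass
    dd if=/data/testfile of=/dev/null bs=1M

    # software RAID 10 (or --level=5) over the four spare disks, then xfs on top
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0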
This isn't all that surprising. The main weaknesses of RAID-5 are poor write performance and stupid hardware controllers that make the write performance even worse than it needs to be; your numbers bear that out. Reads off RAID-5 are usually pretty good.

Your 'dd' test is going to be a little misleading, though, because most DB access isn't purely sequential. While it's easy to see why HW RAID-5 might outperform HW RAID-10 on large sequential reads (the RAID controller would need to be smarter than most to make RAID-10 as fast as RAID-5 there), I would expect HW RAID-5 and RAID-10 random reads to be about equal, or perhaps to give a slight edge to RAID-10.

-- Mark Lewis

On Fri, 2006-07-28 at 13:31 -0400, Jeff Trout wrote:
> So I'm going to be going with HW RAID 5, which went against what I thought
> going in - read performance is more important for my usage than write.
>
> I'm still not sure about that software RAID 10 read number. Something is
> not right there...
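For a quick feel of random-read behavior (closer to what a database actually does) rather than pure sequential throughput, the seeks/sec column from bonnie++ already measures roughly that; a crude alternative is to time small reads at random offsets in the test file. A minimal sketch, assuming the same 20GB test file (about 2,621,440 8KB blocks) and a flushed cache:

    # time 1000 random 8KB reads scattered across the 20GB file
    time for i in $(seq 1 1000); do
        dd if=/data/testfile of=/dev/null bs=8k count=1 \
           skip=$((RANDOM * RANDOM % 2621440)) 2>/dev/null
    done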
Jeff,

On 7/28/06 10:31 AM, "Jeff Trout" <threshar@torgo.978.org> wrote:

> I'm still not sure about that software RAID 10 read number. Something is
> not right there...

It's very consistent with what we've seen before - the hardware RAID controller doesn't do JBOD with SCSI command queuing the way a simple SCSI controller would. The Smart Array 6402 makes a very bad SCSI controller for software RAID.

The hardware results look very good - it seems the 2.6.17 Linux kernel has a drastically improved CCISS driver compared to what I've seen previously.

- Luke
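One quick way to see whether the OS is actually getting tagged command queuing on a given disk is to check the queue depth the Linux SCSI layer reports for it (device name is hypothetical; Smart Array logical drives show up under /dev/cciss/ and don't expose this attribute the same way):

    # queue depth the SCSI layer is using for this device
    cat /sys/block/sdb/device/queue_depth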