Re: RAID stripe size question - Mailing list pgsql-performance
From | Alex Turner
Subject | Re: RAID stripe size question
Date |
Msg-id | 33c6269f0607181227g7c6eea1av5b8dbd9787bfd1c7@mail.gmail.com
In response to | Re: RAID stripe size question ("Luke Lonergan" <llonergan@greenplum.com>)
Responses | Re: RAID stripe size question
List | pgsql-performance
This is a great testament to the fact that software RAID will very often seriously outperform hardware RAID: the OS guys who implemented it took the time to do it right, whereas some controller manufacturers seem to think it's okay to provide sub-standard performance.
Based on the bonnie++ numbers coming back from your array, I would also encourage you to evaluate software RAID, as you might see significantly better performance as a result. RAID 10 is also a good candidate, as it isn't as heavy on cache and CPU as RAID 5.
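For example, on Linux a software RAID 10 can be put together with mdadm along these lines (just a sketch - the device names, chunk size, and filesystem below are hypothetical placeholders, not a recommendation for your hardware):

# Sketch: 4-disk software RAID 10 with a 256KB chunk (hypothetical devices)
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs -t xfs /dev/md0          # any filesystem will do; XFS is just one option
mount /dev/md0 /mnt/pgdata    # hypothetical mount point

From there you can run the same bonnie++ tests against the md device and compare directly with your hardware RAID numbers.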
Alex.
On 7/18/06, Luke Lonergan <llonergan@greenplum.com> wrote:
Mikael,
On 7/18/06 6:34 AM, "Mikael Carneholm" <Mikael.Carneholm@WirelessCar.com> wrote:
> However, what's more important is the seeks/s - ~530/s on a 28 disk
> array is quite lousy compared to the 1400/s on a 12 x 15K disk array

I'm getting 2500 seeks/second on a 36 disk SATA software RAID (ZFS, Solaris 10) on a Sun X4500:
=========== Single Stream ============
With a very recent update to the zfs module that improves I/O scheduling and prefetching, I get the following bonnie++ 1.03a results with a 36 drive RAID10, Solaris 10 U2 on an X4500 with 500GB Hitachi drives (zfs checksumming is off):

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1  32G 120453  99 467814  98 290391  58 109371  99 993344  94  1801   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 30850  99 +++++ +++ +++++ +++
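For reference, a pool like this 36-drive RAID10 is a ZFS pool of two-way mirrors, which can be assembled roughly as follows (a sketch: the pool name and device names are hypothetical, and only the first few of the 18 mirror pairs are written out):

# Sketch: ZFS "RAID10" = one pool of 18 two-way mirrors (36 drives total)
zpool create tank \
    mirror c0t0d0 c1t0d0 \
    mirror c0t1d0 c1t1d0 \
    mirror c0t2d0 c1t2d0
# ...continue until all 18 mirror pairs are listed...
zfs set checksum=off tank   # matches the run above (checksumming off)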
=========== Two Streams ============
Bumping up the number of concurrent processes to 2, we get about 1.5x the single-stream read speed out of the RAID10 under a concurrent workload (you have to add the two streams' rates together):

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1  32G 111441  95 212536  54 171798  51 106184  98 719472  88  1233   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 26085  90 +++++ +++  5700  98 21448  97 +++++ +++  4381  97

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thumperdw-i-1  32G 116355  99 212509  54 171647  50 106112  98 715030  87  1274   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 26082  99 +++++ +++  5588  98 21399  88 +++++ +++  4272  97
So with the two streams added together, that's ~2500 seeks per second, ~1440MB/s sequential block read, and ~212MB/s per-character sequential read.
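(For anyone wanting to reproduce the two-stream numbers: the test is simply two bonnie++ instances running in parallel, something like the sketch below - the directories, file size, and user are assumptions, not the exact command line used here.)

# Sketch: two concurrent bonnie++ streams, 32GB files as in the runs above
bonnie++ -d /tank/bench1 -s 32768 -u nobody &
bonnie++ -d /tank/bench2 -s 32768 -u nobody &
wait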
=======================
- Luke