Thread: Need some help analyzing some benchmarks

Need some help analyzing some benchmarks

From: "Benjamin Krajmalnik"

Before I deploy some new servers, I figured I would do some benchmarking.

Server is a Dual E5620, 96GB RAM, 16 x 450GB SAS(15K) drives.

Controller is an Areca 1680 with 2GB RAM and battery backup.

So far I have only run bonnie++, since each cycle is quite long (writing 192GB).

 

My data partition is 12 drives in RAID 1+0 (2.7TB) running UFS2. vfs.read_max has been set to 32, and no other tuning has been done.

The file system is not mounted with noatime at this point.
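For reference, that tuning amounts to roughly the following on FreeBSD (a sketch only; the mount point is the one used below, and persisting the sysctl is optional):

sysctl vfs.read_max                            # show the current read-ahead setting
sysctl vfs.read_max=32                         # value used for this run
echo 'vfs.read_max=32' >> /etc/sysctl.conf     # persist across reboots

# Remount the data partition with noatime (or add "noatime" to its options
# column in /etc/fstab to make it permanent):
mount -u -o noatime /usr/local/pgsql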

Below are the results:

 

 

db1# bonnie++ -d /usr/local/pgsql -c 4 -n 2:10000000:1000000:64 -u pgsql

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db1.stackdump. 192G   860  99 213731  52 28518  45  1079  70 155479  34  49.9  12
Latency             10008us    2385ms    1190ms     457ms    2152ms     231ms
Version  1.96       ------Sequential Create------ --------Random Create--------
db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
2:10000000:1000000/64    49  33   128  96   277  97    57  39   130  90   275  97
Latency               660ms   13954us   13003us     904ms     334ms   13365us
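For anyone repeating the test, the flags used above break down as follows (per the bonnie++ man page):

# -d /usr/local/pgsql         test directory on the array being measured
# -c 4                        concurrency: four tests run at once
# -n 2:10000000:1000000:64    file-creation phase: 2*1024 files, sized between
#                             1,000,000 and 10,000,000 bytes, in 64 directories
# -u pgsql                    drop privileges to the pgsql user
# No -s is given, so the data size defaults to twice RAM (192GB on this box),
# which is why each cycle takes so long.
bonnie++ -d /usr/local/pgsql -c 4 -n 2:10000000:1000000:64 -u pgsql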

 

Not having anything to compare them to, I do not know whether these are decent numbers. They are definitely slower than a similar setup posted recently using XFS on Linux, but I have not found any FreeBSD/UFS2 results to compare against. What strikes me in particular is that the sequential write performance is higher than the sequential read performance; I would have intuitively expected it to be the other way around.

 

My log partition is a RAID 1 on the same type of drives. Performance follows:

 

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db1.stackdump. 192G   861  99 117023  28 20142  43   359  23 109719  24 419.5  12
Latency              9890us   13227ms    8944ms    3623ms    2236ms     252ms
Version  1.96       ------Sequential Create------ --------Random Create--------
db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
2:10000000:1000000/64    24  16   121  93   276  97    22  15   134  93   275  97
Latency              4070ms   14029us   13079us   15016ms     573ms   13369us

 

After seeing these results, I downloaded the Areca CLI and checked the actual setup.

Info from the RAID controller follows:

 

CLI> sys info
The System Information
===========================================
Main Processor     : 1200MHz
CPU ICache Size    : 32KB
CPU DCache Size    : 32KB
CPU SCache Size    : 512KB
System Memory      : 2048MB/533MHz/ECC
Firmware Version   : V1.48 2010-10-21
BOOT ROM Version   : V1.48 2010-01-04
Serial Number      : Y051CABVAR600825
Controller Name    : ARC-1680
Current IP Address : 192.168.1.100

CLI> rsf info raid=2
Raid Set Information
===========================================
Raid Set Name        : Raid Set # 001
Member Disks         : 12
Total Raw Capacity   : 5400.0GB
Free Raw Capacity    : 0.0GB
Min Member Disk Size : 450.0GB
Raid Set State       : Normal

CLI> vsf info vol=2
Volume Set Information
===========================================
Volume Set Name : ARC-1680-VOL#001
Raid Set Name   : Raid Set # 001
Volume Capacity : 2700.0GB
SCSI Ch/Id/Lun  : 00/00/01
Raid Level      : Raid1+0
Stripe Size     : 8K
Member Disks    : 12
Cache Mode      : Write Back
Tagged Queuing  : Enabled
Volume State    : Normal
===========================================

 

Having done this, I noticed that the stripe size is configured as 8K.

I am thinking the problem may be the stripe size. I had asked the vendor to set up the file system on these two arrays with 8K blocks, and I believe they misunderstood the request and set the RAID stripe size to 8K instead. I assume increasing the stripe size will improve performance.
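Before rebuilding anything, it is probably worth confirming what was actually set on each side. A quick check (the device name below is just an example, not the real one):

# UFS2 block/fragment size on the data partition -- look for the bsize and
# fsize fields in the superblock:
dumpfs /dev/da1p1 | grep -E 'bsize|fsize'

# The controller-side stripe size is what the Areca CLI already reported:
#   Stripe Size : 8K
# That is a separate setting from the filesystem block size, and changing it
# is done on the controller (typically by modifying or re-creating the volume
# set), not through newfs.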

What stripe sizes are you typically using?  I was planning on setting it up with a 64K stripe size.

 

TIA,

 

Benjamin

 

Re: Need some help analyzing some benchmarks

From: Greg Smith
Benjamin Krajmalnik wrote:

My data partition is 12 drives in RAID 1+0 (2.7TB) running UFS2. vfs.read_max has been set to 32, and no other tuning has been done...

Not having anything to compare them to, I do not know whether these are decent numbers. They are definitely slower than a similar setup posted recently using XFS on Linux, but I have not found any FreeBSD/UFS2 results to compare against. What strikes me in particular is that the sequential write performance is higher than the sequential read performance; I would have intuitively expected it to be the other way around.


Generally, write speed higher than read speed means that volume read-ahead still isn't high enough for the OS to keep the disks completely busy.  Try increasing vfs.read_max further; I haven't done many such tests on FreeBSD, but as far as I know settings of 128 and 256 are generally where read performance peaks on that OS.  You should see sequential read speed go up as you increase that parameter, eventually leveling off.  When you reach that point, you've found the right setting.
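A rough sketch of that sweep, assuming a test file considerably larger than RAM already exists on the data partition (the file name here is just an example):

for ra in 32 64 128 256; do
    sysctl vfs.read_max=$ra
    # Sequential read through the filesystem, so the read-ahead setting
    # actually applies; dd reports elapsed time and throughput when it finishes.
    dd if=/usr/local/pgsql/readtest of=/dev/null bs=1m
done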

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books