Need some help analyzing some benchmarks - Mailing list pgsql-performance

From Benjamin Krajmalnik
Subject Need some help analyzing some benchmarks
Msg-id F4E6A2751A2823418A21D4A160B689887B0E77@fletch.stackdump.local
List pgsql-performance

Before I deploy some new servers, I figured I would do some benchmarking.

The server is a dual Xeon E5620 with 96GB RAM and 16 x 450GB 15K SAS drives.

The controller is an Areca ARC-1680 with 2GB of cache RAM and a battery backup unit.

So far I have only run bonnie++, since each cycle is quite long (it writes 192GB, twice the RAM).

 

My data partition is 12 drives in RAID 1+0 (2.7TB) running UFS2. vfs.read_max has been set to 32, and no other tuning has been done.

The file system is not mounted with noatime at this point.
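For reference, this is roughly how I would apply the read-ahead setting and add noatime later on FreeBSD; the device name and mount point below are just placeholders for the actual data volume:

db1# sysctl vfs.read_max=32
db1# echo 'vfs.read_max=32' >> /etc/sysctl.conf     # persist the setting across reboots
# /etc/fstab entry for the data partition with noatime (device is a placeholder):
/dev/da0p1    /usr/local/pgsql    ufs    rw,noatime    2    2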

Below are the results:

 

 

db1# bonnie++ -d /usr/local/pgsql -c 4 -n 2:10000000:1000000:64 -u pgsql
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db1.stackdump. 192G   860  99 213731  52 28518  45  1079  70 155479  34  49.9  12
Latency             10008us    2385ms    1190ms     457ms    2152ms     231ms
Version  1.96       ------Sequential Create------ --------Random Create--------
db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
2:10000000:1000000/64    49  33   128  96   277  97    57  39   130  90   275  97
Latency               660ms   13954us   13003us     904ms     334ms   13365us

 

Not having anything to compare them to, I do not know whether these are decent numbers. They are definitely slower than a similar setup posted recently using XFS on Linux, but I have not found any FreeBSD/UFS2 results to compare against. What strikes me in particular is that the write performance is higher than the read performance; I would have intuitively expected it to be the other way around.
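As a sanity check on the read numbers, I may also run a plain sequential read with dd outside of bonnie++, something along these lines (the file name is a placeholder, and the file needs to be larger than the 96GB of RAM so the OS cache does not skew the result):

db1# dd if=/dev/zero of=/usr/local/pgsql/ddtest bs=1m count=131072    # write a 128GB test file
db1# dd if=/usr/local/pgsql/ddtest of=/dev/null bs=1m                 # sequential read; dd reports bytes/sec at the end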

 

My log partition is a RAID 1 on the same drives. Performance follows:

 

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
db1.stackdump. 192G   861  99 117023  28 20142  43   359  23 109719  24 419.5  12
Latency              9890us   13227ms    8944ms    3623ms    2236ms     252ms
Version  1.96       ------Sequential Create------ --------Random Create--------
db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
2:10000000:1000000/64    24  16   121  93   276  97    22  15   134  93   275  97
Latency              4070ms   14029us   13079us   15016ms     573ms   13369us

 

After seeing these results, I decided to download the Areca CLI and check the actual setup.

Info from the RAID controller follows:

 

CLI> sys info
The System Information
===========================================
Main Processor     : 1200MHz
CPU ICache Size    : 32KB
CPU DCache Size    : 32KB
CPU SCache Size    : 512KB
System Memory      : 2048MB/533MHz/ECC
Firmware Version   : V1.48 2010-10-21
BOOT ROM Version   : V1.48 2010-01-04
Serial Number      : Y051CABVAR600825
Controller Name    : ARC-1680
Current IP Address : 192.168.1.100

CLI> rsf info raid=2
Raid Set Information
===========================================
Raid Set Name        : Raid Set # 001
Member Disks         : 12
Total Raw Capacity   : 5400.0GB
Free Raw Capacity    : 0.0GB
Min Member Disk Size : 450.0GB
Raid Set State       : Normal

CLI> vsf info vol=2
Volume Set Information
===========================================
Volume Set Name : ARC-1680-VOL#001
Raid Set Name   : Raid Set # 001
Volume Capacity : 2700.0GB
SCSI Ch/Id/Lun  : 00/00/01
Raid Level      : Raid1+0
Stripe Size     : 8K
Member Disks    : 12
Cache Mode      : Write Back
Tagged Queuing  : Enabled
Volume State    : Normal
===========================================

 

Having done this, I noticed that the stripe size is configured as 8K.

I am thinking the problem may be due to the stripe size. I had asked the vendor to set up the file systems for these two arrays with 8K blocks, and I believe they may have misunderstood my request and set the RAID stripe size to 8K instead. I assume increasing the stripe size will improve performance.
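Before changing anything on the controller, I also want to double-check what block and fragment sizes the file systems were actually created with; on FreeBSD, dumpfs can show that (the device name here is a placeholder for the data volume):

db1# dumpfs -m /dev/da0p1
# prints the newfs(8) command that would recreate this file system,
# including the -b (block size) and -f (fragment size) values in use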

What stripe sizes are you typically using? I was planning to set these up with a 64K stripe size.

 

TIA,

 

Benjamin

 
