Re: Testing Sandforce SSD - Mailing list pgsql-performance

From: Yeb Havinga
Subject: Re: Testing Sandforce SSD
Msg-id: 4C504A43.3000304@gmail.com
In response to: Re: Testing Sandforce SSD (Yeb Havinga <yebhavinga@gmail.com>)
Responses: Re: Testing Sandforce SSD (Greg Spiegelberg <gspiegelberg@gmail.com>)
List: pgsql-performance

Yeb Havinga wrote:
> Michael Stone wrote:
>> On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
>>> I know I'm talking development now, but is there a case for a pg_xlog
>>> block device to remove the file system overhead and guarantee your
>>> data is written sequentially every time?
>>
>> If you dedicate a partition to xlog, you already get that in practice
>> with no extra development.
> Due to the LBA remapping of the SSD, I'm not sure whether putting files
> that are sequentially written in a different partition (instead of
> together with e.g. tables) would make a difference: in the end the SSD
> will have a set of new blocks in its buffer and somehow arrange them
> into sets of 128KB or 256KB writes for the flash chips. See also
> http://www.anandtech.com/show/2899/2
>
> But I ran out of ideas to test, so I'm going to test it anyway.
Same machine config as mentioned before, with data and xlog on separate
partitions, ext3 with barrier off (safe on this SSD).
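
For reference, the separate-partition setup amounts to roughly the
following; the mount points and exact option list here are illustrative,
not a literal transcript of this box:

mount -t ext3 -o noatime,barrier=0 /dev/sda2 /data   # PostgreSQL data directory
mount -t ext3 -o noatime,barrier=0 /dev/sda3 /xlog   # dedicated xlog partition
# with the server stopped, move pg_xlog onto the xlog partition and symlink it back
mv /data/pg_xlog /xlog/pg_xlog
ln -s /xlog/pg_xlog /data/pg_xlog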

pgbench -c 10 -M prepared -T 3600 -l test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 300
query mode: prepared
number of clients: 10
duration: 3600 s
number of transactions actually processed: 10856359
tps = 3015.560252 (including connections establishing)
tps = 3015.575739 (excluding connections establishing)
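
(Quick sanity check on those numbers: 10856359 transactions / 3600 s ≈ 3016 tps,
which lines up with the reported tps; the small difference is presumably because
pgbench divides by the actual elapsed time, which runs slightly past the
requested 3600 s.)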

This is about 25% faster than data and xlog combined on the same filesystem.

Below is output from iostat -xk 1 -p /dev/sda, which shows per-partition
statistics for each second. sda2 is data, sda3 is xlog. In the third
second a checkpoint seems to start.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          63.50    0.00   30.50    2.50    0.00    3.50

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  6518.00   36.00 2211.00   148.00 35524.00    31.75     0.28    0.12   0.11  25.00
sda1              0.00     2.00    0.00    5.00     0.00   636.00   254.40     0.03    6.00   2.00   1.00
sda2              0.00   218.00   36.00   40.00   148.00  1032.00    31.05     0.00    0.00   0.00   0.00
sda3              0.00  6298.00    0.00 2166.00     0.00 33856.00    31.26     0.25    0.12   0.12  25.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          60.50    0.00   37.50    0.50    0.00    1.50

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  6514.00   33.00 2283.00   140.00 35188.00    30.51     0.32    0.14   0.13  29.00
sda1              0.00     0.00    0.00    3.00     0.00    12.00     8.00     0.00    0.00   0.00   0.00
sda2              0.00     0.00   33.00    2.00   140.00     8.00     8.46     0.03    0.86   0.29   1.00
sda3              0.00  6514.00    0.00 2278.00     0.00 35168.00    30.88     0.29    0.13   0.13  29.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          33.00    0.00   34.00   18.00    0.00   15.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  3782.00    7.00 7235.00    28.00 44068.00    12.18    69.52    9.46   0.09  62.00
sda1              0.00     0.00    0.00    1.00     0.00     4.00     8.00     0.00    0.00   0.00   0.00
sda2              0.00   322.00    7.00 6018.00    28.00 25360.00     8.43    69.22   11.33   0.08  47.00
sda3              0.00  3460.00    0.00 1222.00     0.00 18728.00    30.65     0.30    0.25   0.25  30.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.00    0.00   36.00   22.50    0.00   32.50

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  1079.00    3.00 11110.00    12.00 49060.00     8.83   120.64   10.95   0.08  86.00
sda1              0.00     2.00    0.00    2.00     0.00   320.00   320.00     0.12   60.00  35.00   7.00
sda2              0.00    30.00    3.00 10739.00    12.00 43076.00     8.02   120.49   11.30   0.08  83.00
sda3              0.00  1047.00    0.00  363.00     0.00  5640.00    31.07     0.03    0.08   0.08   3.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          62.00    0.00   31.00    2.00    0.00    5.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  6267.00   51.00 2493.00   208.00 35040.00    27.71     1.80    0.71   0.12  31.00
sda1              0.00     0.00    0.00    3.00     0.00    12.00     8.00     0.00    0.00   0.00   0.00
sda2              0.00   123.00   51.00  344.00   208.00  1868.00    10.51     1.50    3.80   0.10   4.00
sda3              0.00  6144.00    0.00 2146.00     0.00 33160.00    30.90     0.30    0.14   0.14  30.00

