Re: H800 + md1200 Performance problem - Mailing list pgsql-performance

From: Merlin Moncure
Subject: Re: H800 + md1200 Performance problem
Date:
Msg-id: CAHyXU0zwVaHqQTftthtUdXz6-iBHwCO3_mgoq3kokpcbFO3e8g@mail.gmail.com
In response to: Re: H800 + md1200 Performance problem (Tomas Vondra <tv@fuzzy.cz>)
List: pgsql-performance
On Tue, Apr 3, 2012 at 1:01 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 3.4.2012 17:42, Cesar Martin wrote:
>> Yes, the setting is the same on both machines.
>>
>> The results of bonnie++ running without arguments are:
>>
>> Version      1.96   ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>> cltbbdd01      126G    94  99 202873  99 208327  95  1639  91 819392  88  2131 139
>> Latency             88144us     228ms     338ms     171ms     147ms   20325us
>>                     ------Sequential Create------ --------Random Create--------
>>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>> files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> cltbbdd01        16  8063  26 +++++ +++ 27361  96 31437  96 +++++ +++ +++++ +++
>> Latency              7850us    2290us    2310us     530us      11us     522us
>>
>> With dd, one CPU core is pegged at 100% and the results are about 100-170
>> MBps, which I think is a bad result for this HW:
>>
>> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100
>> 100+0 records in
>> 100+0 records out
>> 838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s
>>
>> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync
>> 1000+0 records in
>> 1000+0 records out
>> 8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s
>>
>> dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync
>> 1024+0 records in
>> 1024+0 records out
>> 1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s
>>
>> When I monitor I/O activity with iostat during dd, I have noticed that,
>> if the test takes 10 seconds, the disk has activity only during the last
>> 3 or 4 seconds, and iostat reports about 250-350 MBps. Is that normal?
>
> Well, you're testing writing, and the default behavior is to write the
> data into the page cache. And you do have 64GB of RAM, so the write cache
> may take up a large portion of it - even gigabytes. To really test the I/O
> you need to either (a) write about 2x the amount of RAM or (b) tune
> dirty_ratio/dirty_background_ratio accordingly.
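
Just to make that concrete, here is a quick sketch of both options on a 64GB
box like this one (same target path Cesar already used; the count and sysctl
values below are only illustrative, not tuned recommendations):

    # write ~128GB, i.e. twice the RAM, so the page cache can't absorb it
    dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384

    # or temporarily shrink the kernel's write cache before a smaller test
    sysctl -w vm.dirty_ratio=2
    sysctl -w vm.dirty_background_ratio=1
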
>
> BTW, what are you trying to achieve with "conv=fdatasync" at the end? My
> dd man page does not mention 'fdatasync', and IMHO it's a mistake on your
> side. If you want to sync the data at the end, then you need to do
> something like
>
>   time sh -c "dd ... && sync"
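
Filled in with the file Cesar was already writing to, that would look
something like this (the size here is just an example):

    time sh -c "dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 && sync"
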
>
>> I set read-ahead to different values, but the results don't differ
>> substantially...
>
> Because read-ahead is for reading (which is what a SELECT does most of
> the time), but the tests above are writing to the device. And writing is
> not influenced by read-ahead.
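
Since SELECTs mostly read, a read-side check might be more telling here --
roughly along these lines (assumes a large file already exists at
/vol02/bonnie/DD from one of the dd writes above, that caches are dropped as
root first, and that /dev/sdX stands in for whatever device the md1200
volume actually is):

    sync; echo 3 > /proc/sys/vm/drop_caches    # flush caches so reads hit the array
    dd if=/vol02/bonnie/DD of=/dev/null bs=8M  # sequential read throughput
    blockdev --getra /dev/sdX                  # current read-ahead setting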

Yeah, but I have to agree with Cesar -- those are pretty unspectacular
results for a 12-drive SAS array, to say the least (unless the way dd was
being run was throwing it off somehow).  Something is definitely not
right here.  Maybe we can see similar tests run on the production
server as a point of comparison?
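
Running the exact same scripted sequence on both machines would make the
numbers directly comparable -- something like the following (the paths,
sizes and bonnie++ invocation are only suggestions, adjust to the hardware):

    time sh -c "dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 && sync"
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/vol02/bonnie/DD of=/dev/null bs=8M
    bonnie++ -d /vol02/bonnie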

merlin
