Re: H800 + md1200 Performance problem - Mailing list pgsql-performance

From: Cesar Martin
Subject: Re: H800 + md1200 Performance problem
Msg-id: CAMAsR=637=zAtOSxGF=JH9apfOtU7FaUXR27+J2syzE_c-rdYw@mail.gmail.com
In response to: Re: H800 + md1200 Performance problem (Tomas Vondra <tv@fuzzy.cz>)
List: pgsql-performance
Yes, the setting is the same on both machines.

The results of running bonnie++ without arguments are:

Version      1.96   ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cltbbdd01      126G    94  99 202873  99 208327  95  1639  91 819392  88  2131 139
Latency             88144us     228ms     338ms     171ms     147ms   20325us
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
cltbbdd01        16  8063  26 +++++ +++ 27361  96 31437  96 +++++ +++ +++++ +++
Latency              7850us    2290us    2310us     530us      11us     522us
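For reference, the run above amounts to something like the following explicit invocation (the target directory matches the dd tests below; the -u user is an assumption, only needed when running as root):

bonnie++ -d /vol02/bonnie -u postgres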

With dd, one CPU core goes to 100% and the results are about 100-170 MB/s, which I think is a bad result for this hardware:

dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100
100+0 records in
100+0 records out
838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s

dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s

dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s

When monitoring I/O activity with iostat during the dd runs, I have noticed that if the test takes 10 seconds, the disks show activity only during the last 3 or 4 seconds, and iostat reports about 250-350 MB/s. Is that normal?
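If it helps, the same run can be repeated with direct I/O to take the page cache out of the picture, while iostat runs alongside (a sketch, using the same file as above):

# terminal 1: write bypassing the page cache
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 oflag=direct

# terminal 2: extended per-device stats in MB/s, refreshed every second
iostat -dmx 1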

I have set the read ahead to different values, but the results don't differ substantially...
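For reference, the changes were along these lines (device name from earlier in the thread, values as Tomas suggested):

blockdev --setra 4096 /dev/sdc
blockdev --getra /dev/sdc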

Thanks!

On April 3, 2012 at 15:21, Tomas Vondra <tv@fuzzy.cz> wrote:
On 3.4.2012 14:59, Cesar Martin wrote:
> Hi Mike,
> Thank you for your fast response.
>
> blockdev --getra /dev/sdc
> 256

That's way too low. Is this setting the same on both machines?

Anyway, set it to 4096, 8192 or even 16384 and check the difference.

BTW explain analyze is nice, but it's only half the info, especially
when the issue is outside PostgreSQL (hw, OS, ...). Please provide
samples from iostat / vmstat or tools like that.

Tomas




--
César Martín Pérez
cmartinp@gmail.com
