Re: Bad iostat numbers - Mailing list pgsql-performance

From Mark Kirkwood
Subject Re: Bad iostat numbers
Date
Msg-id 456F89AC.9070207@paradise.net.nz
In response to Bad iostat numbers  ("Carlos H. Reimer" <carlos.reimer@opendb.com.br>)
Responses RES: Bad iostat numbers
List pgsql-performance
Carlos H. Reimer wrote:
> While collecting performance data I discovered very bad numbers in the
> I/O subsystem and I would like to know if I'm thinking correctly.
>
> Here is a typical iostat -x:
>
>
> avg-cpu:  %user   %nice %system %iowait   %idle
>           50.40    0.00    0.50    1.10   48.00
>
> Device:  rrqm/s wrqm/s  r/s   w/s  rsec/s wsec/s  rkB/s  wkB/s avgrq-sz  avgqu-sz await  svctm  %util
> sda        0.00   7.80 0.40  6.40   41.60 113.60  20.80  56.80    22.82 570697.50 10.59 147.06 100.00
> sdb        0.20   7.80 0.60  6.40   40.00 113.60  20.00  56.80    21.94 570697.50  9.83 142.86 100.00
> md1        0.00   0.00 1.20 13.40   81.60 107.20  40.80  53.60    12.93      0.00  0.00   0.00   0.00
> md0        0.00   0.00 0.00  0.00    0.00   0.00   0.00   0.00     0.00      0.00  0.00   0.00   0.00
>
>
>
> Are they not saturated?
>

They look it (if I'm reading your typical numbers correctly) - %util at
100 and svctm in the region of 100 ms!
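The saturation check above can be sketched mechanically. This is a minimal illustration (not part of the original thread) that parses extended iostat lines like the ones quoted and flags devices pegged near 100% utilisation; the field positions follow the header shown above, and column layouts vary between sysstat versions, so the indices may need adjusting:

```python
def parse_iostat(lines):
    """Return {device: (svctm_ms, util_pct)} from `iostat -x` data lines."""
    stats = {}
    for line in lines:
        fields = line.split()
        # Expect: device name + 13 numeric columns, with svctm and %util last.
        if len(fields) == 14 and fields[0] not in ("Device:", "avg-cpu:"):
            svctm, util = float(fields[-2]), float(fields[-1])
            stats[fields[0]] = (svctm, util)
    return stats

def saturated(stats, util_threshold=90.0):
    """Devices whose %util suggests the I/O subsystem is maxed out."""
    return [dev for dev, (_, util) in stats.items() if util >= util_threshold]

# Two of the lines quoted above, as plain text:
sample = [
    "sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 56.80 22.82 570697.50 10.59 147.06 100.00",
    "md1 0.00 0.00 1.20 13.40 81.60 107.20 40.80 53.60 12.93 0.00 0.00 0.00 0.00",
]
print(saturated(parse_iostat(sample)))  # only sda/sdb are pegged at 100
```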

On the face of it, it looks like you need something better than a RAID1
setup - probably RAID10 (RAID5 is probably no good, since it seems you
are writing more than you are reading). However, read on...

If this is a sudden change in system behavior, then it is probably worth
trying to figure out what is causing it (i.e. which queries) - for
instance, it might be that you have some new queries that are doing
disk-based sorts (this would mean you really need more memory rather
than better disks...)
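One way to check for disk-based sorts is to look at the plan of a suspect query. A sketch (not from the original thread; the table and column names are made up, the work_mem value is illustrative, and the "Sort Method" line only appears in newer PostgreSQL versions):

```sql
-- Is work_mem large enough for the sorts this workload runs?
SHOW work_mem;

-- Run a suspect query under EXPLAIN ANALYZE and inspect the Sort node.
EXPLAIN ANALYZE
SELECT * FROM some_large_table ORDER BY some_column;
-- Newer versions report the strategy, e.g.:
--   Sort Method: external merge  Disk: 10240kB
-- "Disk" here means the sort spilled; more work_mem (or RAM) would help.

SET work_mem = '64MB';  -- hypothetical value; tune per session/workload
```

If raising work_mem for the session makes the sort fit in memory, the bottleneck is RAM, not the disks.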

Cheers

Mark


