Re: Problems with pg_locks explosion - Mailing list pgsql-performance
From: Armand du Plessis
Subject: Re: Problems with pg_locks explosion
Msg-id: CANf99sWny5zddcbca3+0MfM4nUvK6tVG+AmO_Gtyo-rqOYRpZg@mail.gmail.com
In response to: Re: Problems with pg_locks explosion (Mark Kirkwood <mark.kirkwood@catalyst.net.nz>)
Responses: Re: Problems with pg_locks explosion
List: pgsql-performance
I had my reservations about the almost 0% IO usage on the raid0 array as well. Looking at the numbers in atop, it doesn't seem to reflect the aggregate of the volumes as one would expect. I'm just happy I am seeing numbers on the individual volumes; they're not too bad.
One thing I was wondering about, as a last possible IO resort: provisioned EBS volumes require that you maintain a queue depth of 1 for every 200 provisioned IOPS to get reliable IO. My queue depth hovers between 0 and 1, while with 1000 IOPS it should be around 5. I even thought about artificially pushing more IO to the volumes, but I think Jeff's right, there's some internal kernel voodoo at play here. I have a feeling it'll be under control with pg_pool (if I can just get the friggen setup there right) and then I'll have more time to dig into it deeper.
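For what it's worth, this is roughly how I'm watching the queue depth per volume (just a sketch - device names will obviously differ per setup):

$ iostat -x 1

The avgqu-sz column shows the average queue depth for each device. By the 1-per-200-IOPS guideline it should sit around 5 on a 1000 IOPS volume, so the 0-1 I'm seeing suggests we're not keeping the volumes busy enough.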
Apologies to the kittens for interrupting your leave :)
On Tue, Apr 2, 2013 at 8:25 AM, Mark Kirkwood <mark.kirkwood@catalyst.net.nz> wrote:
Hi Jeff,

On 02/04/13 19:08, Jeff Janes wrote:

On Monday, April 1, 2013, Mark Kirkwood wrote:
Your provisioned volumes are much better than the default AWS ones,
but are still not hugely fast (i.e. 1000 IOPS is about 8 MB/s worth
of Postgres 8k buffers). So you may need to look at adding more
volumes into the array, or adding some separate ones and putting the
pg_xlog directory on 'em.
However before making changes I would recommend using iostat or sar
to monitor how the volumes are handling the load (I usually choose a 1
sec granularity and look for 100% util and high - several hundred ms
- awaits). Also iotop could be enlightening.
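For example (a sketch - flags from memory, assuming the sysstat package is installed):

$ sar -d -p 1

reports per-device await and %util every second; iostat -x 1 shows the same columns plus the queue depth.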
Hi Mark,
Do you have experience using these tools with AWS? When using non-DAS
in other contexts, I've noticed that these tools often give deranged
results, because the kernel doesn't correctly know what time to
attribute to "network" and what to attribute to "disk". But I haven't
looked into it on AWS EBS, maybe they do a better job there.
Thanks for any insight,
That is a very good point. I did notice a reasonable amount of network traffic on the graphs posted previously, along with a suspiciously low amount of IO for md127 (which I assume is the raid0 array)... and wondered if iostat was not seeing the IO fully. However it slipped my mind (I am on leave with kittens - so claim that as the purrrfect excuse)!
However I don't recall there being a problem with the io tools for standard EBS volumes - but I haven't benchmarked AWS for over a year, so things could be different now - and I have no experience with these new provisioned volumes.
Armand - it might be instructive to do some benchmarking (with another host and volume set) where you do something like:
$ dd if=/dev/zero of=file bs=8k count=1000000
and see if iostat and friends actually show you doing IO as expected!
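One caveat (from memory, so treat as a sketch): plain dd like that can be largely absorbed by the page cache, so you may want to add conv=fdatasync (assuming GNU dd) to force the writes out to the volumes:

$ dd if=/dev/zero of=file bs=8k count=1000000 conv=fdatasync

and keep iostat -x 1 running in a second session while it goes.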