Re: Comparative tps question - Mailing list pgsql-performance

From: Merlin Moncure
Subject: Re: Comparative tps question
Msg-id: CAHyXU0zqN=xaRLM_uKZAfKWFGnSK6ROKfPMgE0rhmQCORoOD2A@mail.gmail.com
In response to: Re: Comparative tps question  (John Lister <john.lister@kickstone.com>)
List: pgsql-performance
On Thu, Nov 29, 2012 at 10:56 AM, John Lister <john.lister@kickstone.com> wrote:
> I must have misread the numbers before when using bonnie++; running it again
> I'm getting 1.3Gb/s read and 700Mb/s write, which looks more promising. In
> terms of vmstat:

pretty nice.

>> *) Very first thing we need to check is if we are storage bound (check i/o
>> wait) and if so where the bind up is. Could be on the wal or heap volume.
>> Another possibility is that we're lock bound which is a completely different
>> issue to deal with. so we want to see top, iostat, vmstat, etc while test is
>> happening.
>
> io_wait is typically <20% which is worse than for bonnie.
> Typical vmstat figures during pgbench are:
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff    cache   si   so    bi    bo     in     cs us sy id wa
> 30  1      0 261900  44340 62650808    0    0 88348 74500 103544 175006 53 20 21  6
>
> and iostat (sda is the wal device)
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           52.80    0.00   17.94   12.22    0.00   17.03
>
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read kB_wrtn
> sda            2544.00         0.00     66432.00          0 132864
> sdc            4153.00    132848.00       136.00     265696 272
>
> I noticed that the system values are usually in the 20% region - could this
> be the locks? btw pgbench is running on the db server not a client - would
> that influence things dramatically?

Since we have some idle cpu% here, we can probably rule out pgbench
itself as a bottleneck by experimenting with the -j switch.  Another
thing we want to test is the "-N" switch -- it skips the updates to
the tellers and branches tables, which in high-concurrency situations
can bind you up from a locking perspective.
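
A sketch of what those runs might look like, assuming a database named
"pgbench" and 32 client connections (the exact -c/-j/-T numbers here are
illustrative only):

  # spread clients across several worker threads so pgbench itself
  # isn't the bottleneck
  pgbench -c 32 -j 8 -T 60 pgbench

  # same run, but skip the tellers/branches updates to take that
  # lock contention out of the picture
  pgbench -c 32 -j 8 -T 60 -N pgbench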

One thing that immediately jumps out here is that your wal volume
could be holding you up, so it may be worth moving the wal to the
ssd volume.  If you can scrounge up a 9.2 pgbench, we can gather
more evidence for that by running pgbench with the
"--unlogged-tables" option, which creates the tables unlogged so
that they are not wal logged (for the record, unlogged tables are
truncated if the server does not shut down cleanly).
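
For example, something along these lines with a 9.2 pgbench (database
name and scale factor here are placeholders):

  # rebuild the pgbench tables as unlogged, so writes to them
  # generate no wal
  pgbench -i -s 100 --unlogged-tables pgbench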

Putting all the options above together (history-only updates, no wal,
multiple threads), your test comes closer to measuring random device
write performance.

>> *) another interesting test to run is large scaling factor (ideally, at
>> least 2x ram) read only test via pgbench -S. merlin
>
> Might give that a go when I next get a chance to run the tests...

yeah -- this will tell us the raw seek performance of the ssd volume,
which presumably will be stupendous.  2x ram is the minimum btw; 10x
would be more appropriate.
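
As a rough sketch (scale factor and database name are placeholders; each
pgbench scale unit is on the order of 15MB, so pick -s large enough that
the data set comfortably exceeds ram):

  # build a data set much larger than ram, then run a read-only
  # (select-only) test against it
  pgbench -i -s 10000 pgbench
  pgbench -S -c 32 -j 8 -T 300 pgbench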

Since you're building a beast, other settings to explore are numa
(http://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html)
and dell memory bios settings that occasionally come from the
factory badly configured (see here:
http://bleything.net/articles/postgresql-benchmarking-memory.html).
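
On the numa side, the usual first check (per that first link) is whether
the kernel's zone reclaim mode is enabled; a quick sketch, assuming a
linux host:

  # 1 or higher means the kernel prefers reclaiming node-local memory
  # over allocating from remote numa nodes
  cat /proc/sys/vm/zone_reclaim_mode

  # disable it for the current boot; persist via /etc/sysctl.conf if
  # it helps
  sysctl -w vm.zone_reclaim_mode=0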

merlin

