Re: reducing the overhead of frequent table locks - now, with WIP patch

From Jignesh Shah
Subject Re: reducing the overhead of frequent table locks - now, with WIP patch
Date
Msg-id BANLkTinS=fdZXPzKtUhJQZwMXMJc=rosAQ@mail.gmail.com
In response to Re: reducing the overhead of frequent table locks - now, with WIP patch  (Jignesh Shah <jkshah@gmail.com>)
Responses Re: reducing the overhead of frequent table locks - now, with WIP patch
List pgsql-hackers
On Mon, Jun 6, 2011 at 11:20 PM, Jignesh Shah <jkshah@gmail.com> wrote:

>
> Okay, I tried it out with the sysbench read scaling test.
> Note that I had tried that earlier on 9.0:
> http://jkshah.blogspot.com/2010/11/postgresql-90-simple-select-scaling.html
>
> And on that test I found that running it on anything bigger than
> 4 cores led to decreased performance.
> Redoing the same test with 100 users on a 4 vCPU virtual machine with
> 8GB RAM and 1M rows, I get
>   transactions:                        17870082 (59566.46 per sec.)
> which is in line with the best number on 9.0.
> This test hardly had any idle CPUs.
>
> However, where it made a huge impact was doing the same test on my 8
> vCPU VM with 8GB RAM, where I get
>    transactions:                        33274594 (110914.85 per sec.)
>
> which is a whopping 1.8x speedup for a 2x increase in vCPUs (from 4 to 8).
> My idle CPU was less than 7%, which, considering that the "useful"
> work is in line with my expectations, is really impressive.
> (And the last time I ran this with MySQL it was around 95K or so for
> the same test.)
>

> Next step DBT-2..
>
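
For reference, the sysbench invocation for the read-only test quoted above is
roughly of this shape (options are approximate and shown only to make the
setup concrete; add the usual --pgsql-* connection options for your
environment):

sysbench --test=oltp --db-driver=pgsql --oltp-read-only=on \
         --oltp-table-size=1000000 --num-threads=100 \
         --max-requests=0 --max-time=300 run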


I tried with a warehouse size of 50, all cached in memory, and my
initial tests with DBT-2 using 8 vCPUs do not show any major change
for a quick 10-minute run. I eliminated write bottlenecks for this
test so as to stress the locks (using full_page_writes=off,
synchronous_commit=off, etc.), and the buffer pool is large enough to
fit the entire 50-warehouse database in memory.
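
For reference, the write-avoidance settings boil down to something like the
following in postgresql.conf (the shared_buffers figure here is illustrative;
the point is only that the 50-warehouse data set fits entirely in memory):

full_page_writes = off
synchronous_commit = off
shared_buffers = 6GB        # big enough to hold the whole 50-warehouse database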

Without patch score:  29088 NOTPM
With patch score:     30161 NOTPM

It could be that I have other problems in the setup. One thing I
noticed is that too many "idle in transaction" connections are being
reported, which tells me something else is becoming a bottleneck here
:-) I also tested with multiple clients but saw similar results: the
postgres side shows multiple backends idle in transaction or waiting
on a fetch, while the clients show themselves waiting in SocketCheck,
as in the example below.

#0  0x00007fc4e83a43c6 in poll () from /lib64/libc.so.6
#1  0x00007fc4e8abd61a in pqSocketCheck ()
#2  0x00007fc4e8abd730 in pqWaitTimed ()
#3  0x00007fc4e8abc215 in PQgetResult ()
#4  0x00007fc4e8abc398 in PQexecFinish ()
#5  0x00000000004050e1 in execute_new_order ()
#6  0x000000000040374f in process_transaction ()
#7  0x0000000000403519 in db_worker ()


So yes, for DBT-2 I think this is inconclusive, since there could
still be other bottlenecks in play (networking included). But
overall, yes, I like the sysbench read scaling numbers quite a bit.


Regards,
Jignesh

