
From Hannu Krosing
Subject Re: LWLock contention: I think I understand the problem
Msg-id 3C34B71E.2040207@tm.ee
In response to Re: LWLock contention: I think I understand the problem  (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses Re: LWLock contention: I think I understand the problem  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers

Tom Lane wrote:

>>It would be interesting to test pgbench
>>using scaling factors that allowed most of the tables to sit in shared
>>memory buffers.  
>>
That's why I recommended testing on a RAM disk ;)

>>Then, we wouldn't be testing disk i/o and would be
>>testing more backend processing throughput.  (Tom, is that true?)
>>
>
>Unfortunately, at low scaling factors pgbench is guaranteed to look
>horrible because of contention for the "branches" rows.  
>
Not really! See the graph in my previous post - database size affects
performance much more!

-s 1 is faster than -s 128 in all cases except 7.1.3, where it becomes
slower once the number of clients is > 16.

>I think that
>it'd be necessary to adjust the ratios of branches, tellers, and
>accounts rows to make it possible to build a small pgbench database
>that didn't show a lot of contention.
>
My understanding is that pgbench is meant to have some level of
contention and should be tested with up to -c = 10 * -s clients, since
each test client is supposed to emulate a real "teller" and there are
10 tellers per unit of -s.
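
(For concreteness, roughly what that rule looks like in practice - a
minimal sketch, with Python chosen just for illustration; the database
name "bench" and the per-client -t of 100 are placeholders, not values
from my actual runs:)

import subprocess

SCALE = 10                    # value used at "pgbench -i -s" time
MAX_CLIENTS = 10 * SCALE      # one client per teller, 10 tellers per -s

clients = 1
while clients <= MAX_CLIENTS:
    # -t is transactions *per client*, held constant in this sketch
    subprocess.run(["pgbench", "-c", str(clients), "-t", "100", "bench"],
                   check=True)
    clients *= 2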

>BTW, I realized over the weekend that the reason performance tails off
>for more clients is that if you hold tx/client constant, more clients
>means more total updates executed, which means more dead rows, which
>means more time spent in unique-index duplicate checks. 
>
That's the point I tried to make by modifying Tatsuo's script to do
what you describe. I'm not smart enough to attribute it directly to
index lookups, but my gut feeling told me that dead tuples must be the
culprit ;)

I first tried to counter the slowdown by running a concurrent
new-style (lazy) vacuum process, but that made things almost 2X slower
(38 --> 20 tps for -s 100 with the original number for -t).

> We know we want
>to change the way that works, but not for 7.2.  At the moment, the only
>way to make a pgbench run that accurately reflects the impact of
>multiple clients and not the inefficiency of dead index entries is to
>scale tx/client down as #clients increases, so that the total number of
>transactions is the same for all test runs.
>
Yes. My test also showed that the impact of per-client startup costs
is much smaller than the impact of the increased number of transactions.

I posted the modified script that does exactly that (512 total
transactions for 1-2-4-8-16-32-64-128 concurrent clients) about a week
ago, together with a graph of the results.
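
(The idea in sketch form - this is not the script itself, which was a
modification of Tatsuo's; Python and the database name "bench" are
placeholders for illustration:)

import subprocess

TOTAL_TX = 512    # total transactions, held constant across all runs

for clients in (1, 2, 4, 8, 16, 32, 64, 128):
    per_client = TOTAL_TX // clients   # scale -t down as -c goes up
    subprocess.run(["pgbench", "-c", str(clients),
                    "-t", str(per_client), "bench"],
                   check=True)

Every run then executes the same 512 transactions in total, so tps
differences between runs reflect the client count rather than the
accumulated dead rows.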

------------------------
Hannu
