From: Amit Kapila
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date:
Msg-id: CAA4eK1JBbYKUWXzzrrcRnPoChB_Tu2-fYt4aW41ADDfETwTVhg@mail.gmail.com
In response to: Re: Speed up Clog Access by increasing CLOG buffers (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Speed up Clog Access by increasing CLOG buffers (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Fri, Oct 21, 2016 at 6:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 4:04 PM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>>> I then started a run at 96 clients which I accidentally killed shortly
>>> before it was scheduled to finish, but the results are not much
>>> different; there is no hint of the runaway CLogControlLock contention
>>> that Dilip sees on power2.
>>>
>> What shared_buffer size were you using? I assume the data set fit into
>> shared buffers, right?
>
> 8GB.
>
>> FWIW as I explained in the lengthy post earlier today, I can actually
>> reproduce the significant CLogControlLock contention (and the patches do
>> reduce it), even on x86_64.
>
> /me goes back, rereads post.  Sorry, I didn't look at this carefully
> the first time.
>
>> For example consider these two tests:
>>
>> * http://tvondra.bitbucket.org/#dilip-300-unlogged-sync
>> * http://tvondra.bitbucket.org/#pgbench-300-unlogged-sync-skip
>>
>> However, it seems I can also reproduce fairly bad regressions, like for
>> example this case with data set exceeding shared_buffers:
>>
>> * http://tvondra.bitbucket.org/#pgbench-3000-unlogged-sync-skip
>
> I'm not sure how seriously we should take the regressions.  I mean,
> what I see there is that CLogControlLock contention goes down by about
> 50% -- which is the point of the patch -- and WALWriteLock contention
> goes up dramatically -- which sucks, but can't really be blamed on the
> patch except in the indirect sense that a backend can't spend much
> time waiting for A if it's already spending all of its time waiting
> for B.
>

Right, and it's not only WALWriteLock: as you can see in the table
below, contention on other locks goes up as well.  I don't think there
is much we can do about that with this patch.  One thing that is
unclear, though, is why an unlogged test shows WALWriteLock contention
at all.

              test               | clients | wait_event_type |   wait_event    | master | granular_locking | no_content_lock | group_update
---------------------------------+---------+-----------------+-----------------+--------+------------------+-----------------+--------------
 pgbench-3000-unlogged-sync-skip |      72 | LWLockNamed     | CLogControlLock | 217012 |            37326 |           32288 |        12040
 pgbench-3000-unlogged-sync-skip |      72 | LWLockNamed     | WALWriteLock    |  13188 |           104183 |          123359 |       103267
 pgbench-3000-unlogged-sync-skip |      72 | LWLockTranche   | buffer_content  |  10532 |            65880 |           57007 |        86176
 pgbench-3000-unlogged-sync-skip |      72 | LWLockTranche   | wal_insert      |   9280 |            85917 |          109472 |        99609
 pgbench-3000-unlogged-sync-skip |      72 | LWLockTranche   | clog            |   4623 |            25692 |           10422 |        11755
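
FWIW, counts of this kind can be collected by sampling
pg_stat_activity, which exposes wait_event_type/wait_event as of 9.6.
A minimal sketch (not necessarily the exact script used for the
numbers above) is to run something like this in a loop for the
duration of the benchmark and sum the per-snapshot counts:

    -- one snapshot of current waiters, grouped by wait event;
    -- repeating it (e.g. once per second) and summing the counts
    -- gives cumulative totals of the same kind as the table above
    SELECT wait_event_type, wait_event, count(*) AS waiters
      FROM pg_stat_activity
     WHERE wait_event IS NOT NULL
     GROUP BY wait_event_type, wait_event
     ORDER BY waiters DESC;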




-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


