Re: Move PinBuffer and UnpinBuffer to atomics - Mailing list pgsql-hackers

From Alexander Korotkov
Subject Re: Move PinBuffer and UnpinBuffer to atomics
Date
Msg-id CAPpHfdvcQnPQU3_KwQHNmngsQFfihx6b21KsW6LACnHhoXW_bQ@mail.gmail.com
In response to Re: Move PinBuffer and UnpinBuffer to atomics  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses Re: Move PinBuffer and UnpinBuffer to atomics
List pgsql-hackers
On Tue, Apr 5, 2016 at 10:26 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Apr 4, 2016 at 2:28 PM, Andres Freund <andres@anarazel.de> wrote:
Hm, interesting. I suspect that's because of the missing backoff in my
experimental patch. If you apply the attached patch ontop of that
(requires infrastructure from pinunpin), how does performance develop?

I have applied this patch as well, but the results are still the same: around 550,000 with 64 clients and 650,000 with 128 clients, with a lot of fluctuation.

128 clients (head + 0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect + pinunpin-cas-9 + backoff)

run1 645769
run2 643161
run3 285546
run4 289421
run5 630772
run6 284363

Could the reason be that we're increasing contention on the LWLock state atomic variable by placing the wait-queue spinlock there?
But I wonder why this would happen during "pgbench -S", since that workload doesn't seem to generate much exclusive LWLock traffic.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 
