Re: Move PinBuffer and UnpinBuffer to atomics - Mailing list pgsql-hackers

From: Amit Kapila
Subject: Re: Move PinBuffer and UnpinBuffer to atomics
Date:
Msg-id: CAA4eK1LVWd=rDv005iChVruv65tVY-BjivOeYKsR-nL7G=MJWA@mail.gmail.com
In response to: Re: Move PinBuffer and UnpinBuffer to atomics (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Sun, Apr 10, 2016 at 6:15 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sun, Apr 10, 2016 at 11:10 AM, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund <andres@anarazel.de> wrote:
On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
> There are results with 5364b357 reverted.


What exactly is this test?
I assume it is a read-only -M prepared pgbench run where the data fits in shared buffers.  However, if you can share the exact details, I can try a similar test.

Yes, the test is:

pgbench -s 1000 -c $clients -j 100 -M prepared -S -T 300 (shared_buffers=24GB)
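For reference, the runs above could be scripted roughly as below. Only the pgbench flags, scale, and duration come from the mail; the client-count list matches the result tables, and the idea of echoing the commands (rather than executing them) is just to show the shape of the loop. shared_buffers=24GB is assumed to have been set in postgresql.conf before the run.

```shell
#!/bin/sh
# Sketch of the read-only benchmark loop. The pgbench invocation is taken
# verbatim from the mail; the client counts 64 and 128 match the tables.
# Commands are echoed here rather than run, since an initialized cluster
# (pgbench -i -s 1000, shared_buffers=24GB) is a prerequisite.
for clients in 64 128; do
    echo "pgbench -s 1000 -c $clients -j 100 -M prepared -S -T 300"
done
```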

Crazy that this has such a negative impact. Amit, can you reproduce
that?

I will try it.

Good.

Okay, I have done some read-only performance testing with the configuration suggested by you to see the impact.

pin_unpin - latest version of pin unpin patch on top of HEAD.
pin_unpin_clog_32 - pin_unpin + change clog buffers to 32

Client_Count/Patch_ver        64        128
pin_unpin                     330280    133586
pin_unpin_clog_32             388244    132388


This shows that at 64 client count, the performance is better with 32 clog buffers.  However, I think this is more attributable to the fact that contention seems to have shifted to ProcArrayLock, as indicated to an extent in Alexander's mail.  I will also try once with the cache the snapshot patch, and with clog buffers as 64.


I went ahead and tried with the cache the snapshot patch and with clog buffers as 64; below is the performance data:

Description of patches

pin_unpin - latest version of pin unpin patch on top of HEAD.
pin_unpin_clog_32 - pin_unpin + change clog buffers to 32
pin_unpin_cache_snapshot - pin_unpin + Cache the snapshot
pin_unpin_clog_64 - pin_unpin + change clog buffers to 64


Client_Count/Patch_ver        64        128
pin_unpin                     330280    133586
pin_unpin_clog_32             388244    132388
pin_unpin_cache_snapshot      412149    144799
pin_unpin_clog_64             391472    132951


The above data seems to indicate that the cache the snapshot patch will push performance further up with clog buffers at 128 (HEAD).  I will take performance data with pin_unpin + clog buffers as 32 + cache the snapshot, but the above seems a good enough indication that making clog buffers 128 is a good move, considering we will one day improve GetSnapshotData, either by the cache the snapshot technique or some other way.  Also, making clog buffers 64 instead of 128 seems to address the regression (at least in the above tests), but for read-write performance, clog buffers at 128 gives better numbers, though the difference between 64 and 128 is not very high.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
