Re: Move PinBuffer and UnpinBuffer to atomics - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Move PinBuffer and UnpinBuffer to atomics
Date 2016-04-06
Msg-id 20160406104352.5bn3ehkcsceja65c@alap3.anarazel.de
In response to Re: Move PinBuffer and UnpinBuffer to atomics  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On 2016-04-06 11:52:28 +0200, Andres Freund wrote:
> Hi,
> 
> On 2016-04-03 16:47:49 +0530, Dilip Kumar wrote:
> 
> > Summary of the run:
> > -----------------------------
> > 1. Within a single run, TPS sampled every 30 seconds is stable.
> > 2. Across runs on HEAD with 64 clients, TPS varies between ~250,000 and
> > ~450,000; see the results below.
> > 
> > run1: 434860   (5min)
> > run2: 275815   (5min)
> > run3: 437872   (5min)
> > run4: 237033   (5min)
> > run5: 347611   (10min)
> > run6: 435933   (20min)
> 
> > > [1] http://www.postgresql.org/message-id/20160401083518.GE9074@awork2.anarazel.de
> > 
> > Non-default parameters:
> > --------------------------------
> > shared_buffers = 8GB
> > max_connections = 150
> > 
> > ./pgbench -c $threads -j $threads -T 1200 -M prepared -S -P 30 postgres
> 
> Which scale did you initialize with? I'm trying to reproduce the
> workload on hydra as precisely as possible...

On hydra, even after a fair amount of tinkering, I cannot reproduce such
variability.

Pin/Unpin itself has only a minor effect; shrinking the lwlock struct on
top of it improves performance from 378,829 to 409,914 TPS in
intentionally short 15s tests (warm cache) with 128 clients, and to a
lesser degree in 120s tests (415,921 to 420,487 TPS).
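
For anyone skimming the thread, the mechanism under test is replacing the
per-buffer spinlock around the refcount with a compare-and-exchange loop on
a single packed state word. Below is a minimal C11 sketch of that
technique; the type, bit layout, and names are illustrative stand-ins, not
the actual BufferDesc code:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: low 18 bits hold the refcount, the rest is left
 * for flag bits.  This is not PostgreSQL's real state word. */
#define REFCOUNT_MASK ((uint32_t) 0x3FFFF)
#define REFCOUNT_ONE  ((uint32_t) 1)

typedef struct
{
    _Atomic uint32_t state;     /* refcount and flags packed together */
} BufDesc;

/* Pin: bump the refcount with a CAS loop instead of a spinlock. */
static void
pin_buffer(BufDesc *buf)
{
    uint32_t old = atomic_load(&buf->state);

    /* On failure, atomic_compare_exchange_weak reloads 'old' with the
     * current value, so the loop retries against fresh state. */
    while (!atomic_compare_exchange_weak(&buf->state, &old,
                                         old + REFCOUNT_ONE))
        ;
}

/* Unpin: the decrement needs no loop at all. */
static void
unpin_buffer(BufDesc *buf)
{
    atomic_fetch_sub(&buf->state, REFCOUNT_ONE);
}

int
main(void)
{
    BufDesc buf = { .state = 0 };

    pin_buffer(&buf);
    printf("refcount = %u\n",
           (unsigned) (atomic_load(&buf.state) & REFCOUNT_MASK));
    unpin_buffer(&buf);
    return 0;
}

The point is that a pin becomes a single atomic read-modify-write rather
than a spinlock acquire, update, release sequence, so there is no separate
lock word for concurrent backends to bounce between caches.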

It appears, over 5 runs, that the alignment fix shortens the ramp-up
phase from about 100s to about 8s; interesting.
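
For context, the alignment fix mentioned above amounts to padding each
buffer descriptor out to a cache-line boundary, so descriptors for
neighbouring buffers stop false-sharing a line. A sketch of the idiom,
assuming a 64-byte line; the union name and sizes here are placeholders,
not the committed code:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define CACHELINE_SIZE 64       /* assumed; typical for x86 */

typedef struct
{
    _Atomic uint32_t state;
    /* ... other descriptor fields ... */
} BufDesc;

/* Pad to a full cache line so two descriptors never share one: a pin
 * on buffer N then can't invalidate the line holding buffer N+1. */
typedef union
{
    BufDesc desc;
    char    pad[CACHELINE_SIZE];
} BufDescPadded;

int
main(void)
{
    BufDescPadded descs[2];

    printf("stride between descriptors: %zu bytes\n",
           (size_t) ((char *) &descs[1] - (char *) &descs[0]));
    return 0;
}

Note that padding alone isn't quite enough: the descriptor array itself
also has to start on a cache-line boundary for the elements to line up
with physical lines.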

Andres


