Re: Wait free LW_SHARED acquisition - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Wait free LW_SHARED acquisition
Date
Msg-id 20130927075707.GB5588@awork2.anarazel.de
In response to Re: Wait free LW_SHARED acquisition  (Andres Freund <andres@2ndquadrant.com>)
On 2013-09-27 09:21:05 +0200, Andres Freund wrote:
> > >So the goal is to have LWLockAcquire(LW_SHARED) never block unless
> > >somebody else holds an exclusive lock. To produce enough appetite for
> > >reading the rest of the long mail, here's some numbers on a
> > >pgbench -j 90 -c 90 -T 60 -S (-i -s 10) on a 4xE5-4620
> > >
> > >master + padding: tps = 146904.451764
> > >master + padding + lwlock: tps = 590445.927065
> > 
> > How does that compare with simply increasing NUM_BUFFER_PARTITIONS?
> 
> Heaps better. In the case that prompted this investigation, many of the
> pages with hot spinlocks were simply the same ones over and over again,
> so partitioning the lock space won't help much there.
> That's not exactly an uncommon scenario since often enough there's a
> small amount of data hit very frequently and lots more that is accessed
> only infrequently. E.g. recently inserted data and such tends to be very hot.
> 
> I can run a test on the 4 socket machine if it's unused, but on my 2
> socket workstation, at least for our simulation of the original
> workloads, the improvements were marginal after increasing the padding
> to a full cacheline.

Ok, the 4 socket machine was free:

padding + 16 partitions:
tps = 147884.648416

padding + 32 partitions:
tps = 141777.841125

padding + 64 partitions:
tps = 141561.539790

padding + 16 partitions + new lwlocks
tps = 601895.580903 (yeha, still reproduces after some sleep!)


Now, the other numbers were best-of-three, these aren't, but I think
it's pretty clear that you're not going to see the same benefits. I am
not surprised...
The current implementation of lwlocks will frequently block others, both
during acquisition and release of locks. What's even worse, backends
fruitlessly spinning while trying to acquire the spinlock will often
delay the lock's release, because we need that same spinlock during
release.
With the proposed algorithm, even if we need the spinlock during the
release of an lwlock because there are queued PGPROCs, we acquire that
spinlock only after having already released the lock itself...

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
