Re: Separating Buffer LWlocks - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Separating Buffer LWlocks
Date 2015-09-07
Msg-id 20150907175909.GD5084@alap3.anarazel.de
In response to Re: Separating Buffer LWlocks  (Andres Freund <andres@anarazel.de>)
Responses Re: Separating Buffer LWlocks
List pgsql-hackers
On 2015-09-06 15:28:40 +0200, Andres Freund wrote:
> Hm. I found that the buffer content lwlocks can actually also be a
> significant source of contention - I'm not sure reducing padding for
> those is going to be particularly nice. I think we should rather move
> the *content* lock inline into the buffer descriptor. The io lock
> doesn't matter and can be as small as possible.

POC patch along those lines attached. This way the content locks have
full 64-byte alignment *without* any additional memory usage, because
buffer descriptors are already padded to 64 bytes.  I had to reorder the
BufferDesc contents a bit and reduce the width of usagecount to 8 bits
(which is fine given that 5 is our highest value) to make enough room.
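
To make the layout a bit more concrete without pasting the whole patch,
here is a rough sketch of the idea. Every type and field name below is a
simplified placeholder, not the actual BufferDesc/LWLock definitions and
not the exact layout from the attached patch:

#include <stdint.h>

/* placeholder for LWLock; the real struct looks different */
typedef struct LWLockSketch
{
    uint32_t    state;      /* lock word placeholder */
    void       *waiters;    /* wait-queue placeholder */
} LWLockSketch;

/* placeholder for BufferTag (relfilenode + fork + block) */
typedef struct BufferTagSketch
{
    uint32_t    rnode[3];
    uint32_t    forkNum;
    uint32_t    blockNum;
} BufferTagSketch;

typedef struct BufferDescSketch
{
    BufferTagSketch tag;        /* ID of page contained in buffer */
    uint16_t    flags;
    uint8_t     usagecount;     /* 8 bits are plenty: 5 is the maximum */
    uint8_t     unused;
    uint32_t    refcount;       /* # of backends holding pins */
    int32_t     wait_backend_pid;
    int32_t     buf_id;
    int32_t     freeNext;

    LWLockSketch content_lock;  /* now inline, sharing the descriptor's line */
} BufferDescSketch;

/* descriptors are already padded/aligned to a full (assumed 64 byte)
 * cacheline, so the inline content lock costs no extra memory */
typedef union BufferDescPaddedSketch
{
    BufferDescSketch bufferdesc;
    char        pad[64];
} BufferDescPaddedSketch;

_Static_assert(sizeof(BufferDescSketch) <= 64,
               "descriptor plus inline content lock has to fit one cacheline");

In this sketch the IO locks would live outside the descriptor in their own
array; that's where the padding experiment below comes in.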

I've experimented with reducing the padding of the IO locks to nothing,
since they're not that often contended at the CPU level. But even on my
laptop that led to a noticeable regression for a read-only pgbench
workload where the dataset fit into the OS page cache but not into s_b.
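
(For reference, the shape of that workload is a plain select-only pgbench
run. The scale factor and client counts below are only illustrative, sized
so the roughly 1.5GB dataset exceeds a small shared_buffers while still
fitting in RAM; they're not the exact parameters of this run, and "bench"
is just a placeholder database name:

    pgbench -i -s 100 bench                        # init ~1.5GB of data
    pgbench -S -M prepared -c 8 -j 8 -T 60 bench   # select-only run
)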

> Additionally I think we should increase the lwlock padding to 64 bytes
> (i.e. by far the most common cacheline size). In the past I've seen
> that to be rather beneficial.

You'd already done that...
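
For anyone skimming the thread, the padding in question is the usual trick
of forcing each separately allocated lock onto its own cacheline. Again
just a sketch with placeholder names, not the actual lwlock.h definitions,
and assuming the 64 byte figure from above:

#include <stdint.h>

#define LOCK_PAD_SIZE 64            /* assumed cacheline size */

typedef struct LWLockSketch         /* same placeholder stub as above */
{
    uint32_t    state;
    void       *waiters;
} LWLockSketch;

/* each lock occupies a whole cacheline, so two hot locks can never
 * share a line and bounce it between sockets */
typedef union LWLockPaddedSketch
{
    LWLockSketch lock;
    char         pad[LOCK_PAD_SIZE];
} LWLockPaddedSketch;

Allocating a lock array as LWLockPaddedSketch[] buys that isolation at the
price of some memory, which is also why shrinking the padding for the
rarely contended IO locks looked tempting in the first place.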


Benchmarking this on my 4-core/8-thread laptop I see a very slight
performance increase - which is about what we'd expect, since this really
should only affect multi-socket machines.

Greetings,

Andres Freund

