Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline
Date
Msg-id CA+TgmoZK0FHxNx21yaM1yW=eR-j1dssahM7h3zWj6TB+WwA=Zw@mail.gmail.com
In response to Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline  (Andres Freund <andres@2ndquadrant.com>)
List pgsql-hackers
On Tue, Sep 24, 2013 at 6:48 AM, Andres Freund <andres@2ndquadrant.com> wrote:
> On 2013-09-24 12:39:39 +0200, Tom Lane wrote:
>> Andres Freund <andres@2ndquadrant.com> writes:
>> > So, what we do is we guarantee that LWLocks are aligned to 16- or 32-byte
>> > boundaries. That means that on x86-64 (64-byte cachelines, 24-byte
>> > unpadded LWLock) two LWLocks share a cacheline.
>
>> > In my benchmarks, changing the padding to 64 bytes considerably increases
>> > performance in workloads with contended LWLocks.
>>
>> At a huge cost in RAM.  Remember we make two LWLocks per shared buffer.
>
>> I think that rather than using a blunt instrument like that, we ought to
>> see if we can identify pairs of hot LWLocks and make sure they're not
>> adjacent.
>
> That's a good point. What about making all but the shared-buffer LWLocks
> 64 bytes? It seems hard to analyze the interactions between all the locks
> and keep that analysis maintained.
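
A minimal sketch of what padding each lock out to a full cacheline could
look like (the type names and field layout here are illustrative, not the
actual lwlock.h definitions):

#include <stdint.h>

#define CACHELINE_SIZE 64

/* Roughly what an unpadded lock carries: ~24-32 bytes on x86-64. */
typedef struct SketchLWLock
{
    volatile uint8_t mutex;      /* spinlock protecting the fields below */
    uint8_t          exclusive;  /* # of exclusive holders (0 or 1) */
    int              shared;     /* # of shared holders */
    void            *head;       /* head of the wait queue */
    void            *tail;       /* tail of the wait queue */
} SketchLWLock;

/* Pad the lock so sizeof() comes out to a full cacheline. */
typedef union SketchLWLockPadded
{
    SketchLWLock lock;
    char         pad[CACHELINE_SIZE];
} SketchLWLockPadded;

/*
 * An array of SketchLWLockPadded allocated on a cacheline boundary gives
 * each lock its own line, so contended neighbours no longer false-share --
 * at the RAM cost Tom points out, since there are two LWLocks per shared
 * buffer.
 */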

I think somebody had a patch a few years ago that made it so that the
LWLocks didn't have to be in a single array, but could instead be
anywhere in shared memory.  Internally, lwlock.c held onto LWLock
pointers instead of LWLockIds.  That idea seems like it might be worth
revisiting, in terms of giving us more options as to how LWLocks can
be laid out in shared memory.
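
A rough sketch of the pointer-based interface that idea implies
(hypothetical names and signatures, not the actual patch):

typedef struct LWLock LWLock;   /* opaque to callers */
typedef enum { LW_SHARED, LW_EXCLUSIVE } SketchLWLockMode;

/* Array-based style: the lock is an index into one big shared array. */
extern void LWLockAcquireById(int lockid, SketchLWLockMode mode);

/* Pointer-based style: the lock can live anywhere in shared memory. */
extern void LWLockAcquireByPtr(LWLock *lock, SketchLWLockMode mode);

/*
 * With the pointer form, each subsystem can decide its own layout: the
 * buffer manager could keep its per-buffer locks tightly packed to save
 * RAM, while a handful of hot named locks are each given a full cacheline.
 */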

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


