Re: Spinlocks, yet again: analysis and proposed patches - Mailing list pgsql-hackers

From Gavin Sherry
Subject Re: Spinlocks, yet again: analysis and proposed patches
Date
Msg-id Pine.LNX.4.58.0509160952580.22114@linuxworld.com.au
In response to Re: Spinlocks, yet again: analysis and proposed patches  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Spinlocks, yet again: analysis and proposed patches
List pgsql-hackers
On Thu, 15 Sep 2005, Tom Lane wrote:

> One thing that did seem to help a little bit was padding the LWLocks
> to 32 bytes (by default they are 24 bytes each on x86_64) and ensuring
> the array starts on a 32-byte boundary.  This ensures that we won't have
> any LWLocks crossing cache lines --- contended access to such an LWLock
> would probably incur the sort of large penalty seen above, because you'd
> be trading two cache lines back and forth not one.  It seems that the
> important locks are not split that way in CVS tip, because the gain
> wasn't much, but I wonder whether some effect like this might explain
> some of the unexplainable performance changes we've noticed in the past
> (eg, in the dbt2 results).  A seemingly unrelated small change in the
> size of other data structures in shared memory might move things around
> enough to make a performance-critical lock cross a cache line boundary.

What about padding the LWLock to 64 bytes on these architectures? Both the
P4 and Opteron have 64-byte cache lines, IIRC. This would ensure that a
cache line never holds two LWLocks.

Gavin

