Re: Reducing overhead of frequent table locks - Mailing list pgsql-hackers

From: Noah Misch
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-24 16:34:26
Msg-id: 20110524163426.GD21833@tornado.gateway.2wire.net
In response to: Re: Reducing overhead of frequent table locks (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Reducing overhead of frequent table locks (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Tue, May 24, 2011 at 11:52:54AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 11:38 AM, Noah Misch <noah@leadboat.com> wrote:
> >> Another random idea for optimization: we could have a lock-free array
> >> with one entry per backend, indicating whether any fast-path locks are
> >> present.  Before acquiring its first fast-path lock, a backend writes
> >> a 1 into that array and inserts a store fence.  After releasing its
> >> last fast-path lock, it performs a store fence and writes a 0 into the
> >> array.  Anyone who needs to grovel through all the per-backend
> >> fast-path arrays for whatever reason can perform a load fence and then
> >> scan the array.  If I understand how this stuff works (and it's very
> >> possible that I don't), when the scanning backend sees a 0, it can be
> >> assured that the target backend has no fast-path locks and therefore
> >> doesn't need to acquire and release that LWLock or scan that fast-path
> >> array for entries.
> >
> > I'm probably just missing something, but can't that conclusion become obsolete
> > arbitrarily quickly?  What if the scanning backend sees a 0, and the subject
> > backend is currently sleeping just before it would have bumped that value?  We
> > need to take the LWLock if there's any chance that the subject backend has not
> > yet seen the scanning backend's strong_lock_counts[] update.
> 
> Can't we bump strong_lock_counts[] *first*, make sure that change is
> globally visible, and only then start scanning the array?
> 
> Once we've bumped strong_lock_counts[] and made sure everyone can see
> that change, it's still possible for backends to take a fast-path lock
> in some *other* fast-path partition, but nobody should be able to add
> any more fast-path locks in the partition we care about after that
> point.
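
For concreteness, I read the strong-locker side of that as roughly the
sketch below.  (All identifiers other than strong_lock_counts[],
MaxBackends, and the LWLock calls are invented here, and I'm assuming
some full-fence primitive, spelled pg_memory_barrier() for the purpose
of the sketch.)

    /* Strong locker: advertise intent, then scan every backend. */
    strong_lock_counts[partition]++;
    pg_memory_barrier();    /* bump must be visible before the reads below */

    for (i = 0; i < MaxBackends; i++)
    {
        if (!fast_path_used[i])
            continue;       /* no fast-path locks there; skip the LWLock */
        LWLockAcquire(fast_path_lwlock[i], LW_EXCLUSIVE);
        /* ... transfer any fast-path entries for this partition ... */
        LWLockRelease(fast_path_lwlock[i]);
    }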

There's a potentially unbounded delay between when the subject backend reads
strong_lock_counts[] and when it sets its fast-path-used flag.  (I didn't mean
"not yet seen" in the sense that some memory load would not show the latest
value.  I just meant that the subject backend may still be taking relevant
actions based on its previous load of the value.)  We could have the subject
backend set its fast-path-used flag before even checking strong_lock_counts[],
then clear the flag if strong_lock_counts[] dissuades it from proceeding.
Maybe that's what you had in mind?
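
In code form, roughly (same caveats as above; my_fast_path_count, a local
count of fast-path locks currently held, is likewise invented):

    /* Fast-path locker: advertise before reading strong_lock_counts[]. */
    fast_path_used[MyBackendId] = true;
    pg_memory_barrier();    /* store must be visible before the read below */

    if (strong_lock_counts[partition] == 0)
    {
        /* Either no strong lock is pending on this partition, or the
         * strong locker's scan will see our flag; fast path is safe. */
    }
    else
    {
        /* Dissuaded.  Retract the flag -- but only if we hold no other
         * fast-path locks, since the flag covers all of them -- and
         * fall back to the main lock table. */
        if (my_fast_path_count == 0)
            fast_path_used[MyBackendId] = false;
    }

The unconditional set plus conditional clear keeps the flag meaning "may
hold fast-path locks" rather than "does hold", which is all the scanning
side needs anyway.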

That being said, it's a slight extra cost for all fast-path lockers to benefit
the strong lockers, so I'm not prepared to guess whether it will pay off.

