Re: Reducing overhead of frequent table locks - Mailing list pgsql-hackers

From: Noah Misch
Subject: Re: Reducing overhead of frequent table locks
Date:
Msg-id: 20110514030550.GB22947@tornado.gateway.2wire.net
In response to: Re: Reducing overhead of frequent table locks (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Reducing overhead of frequent table locks (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Fri, May 13, 2011 at 08:55:34PM -0400, Robert Haas wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch <noah@leadboat.com> wrote:
> > If I'm understanding correctly, your pseudocode would look roughly like this:
> >
> >         if (level >= ShareUpdateExclusiveLock)

> I think ShareUpdateExclusiveLock should be treated as neither weak nor
> strong.

Indeed; that should be ShareLock.

> It certainly can't be treated as weak - i.e. use the fast
> path - because it's self-conflicting.  It could be treated as strong,
> but since it doesn't conflict with any of the weak lock types, that
> would only serve to prevent fast-path lock acquisitions that otherwise
> could have succeeded.  In particular, it would unnecessarily disable
> fast-path lock acquisition for any relation being vacuumed, which
> could be really ugly considering that one of the main workloads that
> would benefit from something like this is the case where lots of
> backends are fighting over a lock manager partition lock on a table
they all want to read and/or modify.  I think it's best for
> ShareUpdateExclusiveLock to always use the regular lock-acquisition
> path, but it need not worry about incrementing strong_lock_counts[] or
> importing local locks in so doing.

Agreed.
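
To pin down the rule, here is a minimal sketch of the classification we
seem to be converging on.  The enum mirrors the lock levels in lock.h;
the macro names are purely illustrative, not proposed identifiers:

    /* Lock levels, ordered weakest to strongest, as in lock.h. */
    typedef enum
    {
        AccessShareLock = 1,
        RowShareLock,
        RowExclusiveLock,
        ShareUpdateExclusiveLock,
        ShareLock,
        ShareRowExclusiveLock,
        ExclusiveLock,
        AccessExclusiveLock
    } LockLevel;

    /*
     * Weak modes are fast-path eligible: they neither self-conflict nor
     * conflict with one another.  Strong modes conflict with at least
     * one weak mode, so they must bump strong_lock_counts[] and import
     * fast-path entries.  ShareUpdateExclusiveLock is deliberately
     * neither: it always takes the regular path, but it skips the
     * strong-lock bookkeeping.
     */
    #define LockModeIsWeak(level)      ((level) < ShareUpdateExclusiveLock)
    #define LockModeIsStrong(level)    ((level) >= ShareLock)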

> Also, I think in the step just after marker one, we'd import only
> local locks whose lock tags were equal to the lock tag on which we
> were attempting to acquire a strong lock.  The downside of this whole
> approach is that acquiring a strong lock becomes, at least
> potentially, a lot slower, because you have to scan through the whole
> backend array looking for fast-path locks to import (let's not use the
> term "local lock", which is already in use within the lock manager
> code).  But maybe that can be optimized enough not to matter.  After
> all, if the lock manager scaled perfectly at high concurrency, we
> wouldn't be thinking about this in the first place.
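
For concreteness, here's a standalone sketch of that import step.  Every
name and structure below is illustrative only, and all synchronization
(the per-backend locking a real implementation would need) is elided:

    #include <stdint.h>

    #define MAX_BACKENDS            64
    #define FP_SLOTS_PER_BACKEND    16
    #define STRONG_LOCK_PARTITIONS  1024

    typedef struct
    {
        uint32_t    dbid;
        uint32_t    relid;
    } LockTag;

    typedef struct
    {
        LockTag     tag;
        int         held;       /* nonzero if this slot is in use */
    } FastPathSlot;

    static FastPathSlot fp_slots[MAX_BACKENDS][FP_SLOTS_PER_BACKEND];
    static int  strong_lock_counts[STRONG_LOCK_PARTITIONS];

    static uint32_t
    tag_hash(const LockTag *tag)
    {
        return (tag->dbid * 31 + tag->relid) % STRONG_LOCK_PARTITIONS;
    }

    /* Placeholder: move one fast-path entry into the main lock table. */
    static void
    transfer_to_main_table(int backend, int slot)
    {
        (void) backend;
        (void) slot;
    }

    void
    prepare_strong_lock(const LockTag *tag)
    {
        /* Marker 1: once this is visible, new fast-path acquisitions in
         * this partition fail over to the main lock table. */
        strong_lock_counts[tag_hash(tag)]++;

        /* Import only entries whose tag equals ours; unrelated fast-path
         * locks stay put, which bounds the cost of the scan. */
        for (int b = 0; b < MAX_BACKENDS; b++)
            for (int s = 0; s < FP_SLOTS_PER_BACKEND; s++)
                if (fp_slots[b][s].held &&
                    fp_slots[b][s].tag.dbid == tag->dbid &&
                    fp_slots[b][s].tag.relid == tag->relid)
                {
                    transfer_to_main_table(b, s);
                    fp_slots[b][s].held = 0;
                }
    }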

Incidentally, I used the term "local lock" because I assumed fast-path locks
would still go through the lock manager far enough to populate the local lock
table.  But there may be no reason to do so.
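
If the fast path really does bypass the lock manager entirely, the
weak-lock side would reduce to roughly the following, reusing the
illustrative types from the sketch above (again, synchronization between
the counter check and the slot store is elided; that race is the part a
real design has to get right):

    /*
     * Weak-lock fast path.  Returns nonzero on success; on failure the
     * caller falls back to the regular lock manager, which is also where
     * the local lock table would be populated if we still want that.
     */
    int
    try_fast_path_lock(int backend, const LockTag *tag)
    {
        /* A strong lock is held, or soon will be, somewhere in this
         * partition; take the slow path so any conflict is visible. */
        if (strong_lock_counts[tag_hash(tag)] > 0)
            return 0;

        for (int s = 0; s < FP_SLOTS_PER_BACKEND; s++)
        {
            if (!fp_slots[backend][s].held)
            {
                fp_slots[backend][s].tag = *tag;
                fp_slots[backend][s].held = 1;
                return 1;
            }
        }

        return 0;               /* out of free slots; fall back */
    }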

> > I wonder if, instead, we could signal all backends at
> > marker 1 to dump the applicable parts of their local (memory) lock tables to
> > files.  Or to another shared memory region, if that didn't mean statically
> > allocating the largest possible required amount.  If we were willing to wait
> > until all backends reach a CHECK_FOR_INTERRUPTS, they could instead make the
> > global insertions directly.  That might yield a decent amount of bug swatting to
> > fill in missing CHECK_FOR_INTERRUPTS, though.
> 
> I've thought about this; I believe it's unworkable.  If one backend
> goes into the tank (think: SIGSTOP, or blocking on I/O to an
> unreadable disk sector) this could lead to cascading failure.

True.  It would need some fairly major advantages to justify that risk, and I
don't see any.


Overall, looks like a promising design sketch to me.  Thanks.

nm

