Re: bug in fast-path locking - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: bug in fast-path locking
Date:
Msg-id: CA+Tgmoa-7+UDsyZr==+RKCr8FVXMx1CAfZwc42H9yeAkqiPqhg@mail.gmail.com
In response to: Re: bug in fast-path locking (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: bug in fast-path locking (Jeff Davis <pgsql@j-davis.com>)
List: pgsql-hackers
On Mon, Apr 9, 2012 at 2:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Apr 9, 2012 at 1:49 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Haven't looked at the code, but maybe it'd be better to not bump the
>>> strong lock count in the first place until the final step of updating
>>> the lock tables?
>
>> Well, unfortunately, that would break the entire mechanism.  The idea
>> is that we bump the strong lock count first.  That prevents anyone
>> from taking any more fast-path locks on the target relation.  Then, we
>> go through and find any existing fast-path locks that have already
>> been taken, and turn them into regular locks.  Finally, we resolve the
>> actual lock request and either grant the lock or block, depending on
>> whether conflicts exist.
>
> OK.  (Is that explained somewhere in the comments?  I confess I've not
> paid any attention to this patch up to now.)

There's a new section in src/backend/storage/lmgr/README on Fast Path
Locking, plus comments at various places in the code.  It's certainly
possible I've missed something that should be updated, but I did my
best.
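In code form, the sequence described above comes out roughly like this
(a simplified sketch, not the actual lock.c code; StrongLockCounts and
the helper functions are illustrative stand-ins for the real data
structures and routines):

    /*
     * Simplified sketch of strong-lock acquisition.  NOT the actual
     * lock.c code: StrongLockCounts and the helpers below are made-up
     * names standing in for the real structures and routines.
     */
    void
    AcquireStrongLock(const LOCKTAG *locktag, LOCKMODE lockmode)
    {
        uint32      partition = LockTagHashCode(locktag) % 1024;

        /* Step 1: bump the strong lock count for this partition so that
         * no backend can take any *new* fast-path locks that hash here. */
        SpinLockAcquire(&StrongLockCounts->mutex);
        StrongLockCounts->count[partition]++;
        SpinLockRelease(&StrongLockCounts->mutex);

        /* Step 2: find fast-path locks already taken on this relation
         * and convert them into ordinary shared lock table entries. */
        TransferExistingFastPathLocks(locktag);

        /* Step 3: resolve the request normally: grant the lock, or sleep
         * if the newly transferred entries (or anything else) conflict. */
        if (LockHasConflicts(locktag, lockmode))
            WaitOnLock(locktag, lockmode);
        else
            GrantLock(locktag, lockmode);
    }
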

> I wonder though whether
> you actually need a *count*.  What if it were just a flag saying "do not
> take any fast path locks on this object", and once set it didn't get
> unset until there were no locks left at all on that object?

I think if you read the above-referenced section of the README you'll
be deconfused.  The short version is that we divide up the space of
lockable objects into 1024 partitions, and each strong lock count is
actually a count of all strong locks whose locktags hash into that
partition.  It is therefore
theoretically possible for locking to get slower on table A because
somebody's got an AccessExclusiveLock on table B, if the low-order 10
bits of the locktag hashcodes happen to collide.  In such a case, all
locks on both relations would be forced out of the fast path until the
AccessExclusiveLock was released. If it so happens that table A is
getting pounded with something that looks a lot like pgbench -S -c 32
-j 32 on a system with more than a couple of cores, the user will be
sad.  I judge that real-world occurrences of this problem will be
quite rare, since fairly few people have adequate reasons for
long-lived strong table locks anyway, and 1024 partitions seemed like
enough to
keep most people from suffering too badly.  I don't see any way to
eliminate the theoretical possibility of this while still having the
basic mechanism work, either, though we could certainly crank up the
partition count.
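
If it helps, the bookkeeping amounts to something like this
(paraphrased from memory; lock.c is authoritative, and the names there
may differ):

    /* One spinlock-protected array of counters shared by all lockable
     * objects; two locktags share a counter exactly when the low-order
     * 10 bits of their hash codes collide. */
    #define FAST_PATH_STRONG_LOCK_HASH_BITS        10
    #define FAST_PATH_STRONG_LOCK_HASH_PARTITIONS  \
        (1 << FAST_PATH_STRONG_LOCK_HASH_BITS)
    #define StrongLockHashPartition(hashcode) \
        ((hashcode) % FAST_PATH_STRONG_LOCK_HASH_PARTITIONS)

    typedef struct
    {
        slock_t     mutex;
        uint32      count[FAST_PATH_STRONG_LOCK_HASH_PARTITIONS];
    } FastPathStrongLockData;

Cranking up the partition count would then just mean raising
FAST_PATH_STRONG_LOCK_HASH_BITS, at the cost of a larger shared-memory
array.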

> In
> particular, it's not clear from what you're saying here why it's okay
> to let the value revert once you've changed some of the FP locks to
> regular locks.

It's always safe to convert a fast-path lock to a regular lock; it
just costs you some performance.  The idea is that everything that
exists as a fast-path lock is something that's certain not to have any
lock conflicts.  As soon as we discover that a particular lock might
be involved in a lock conflict, we have to turn it into a "real" lock.
So if backends 1, 2, and 3 take fast-path locks on A (to SELECT from
it, for example) and then backend 4 wants an AccessExclusiveLock, it
will pull the locks from those backends out of the fast-path mechanism
and make regular lock entries for them before checking for lock
conflicts.  Then, it will discover that there are in fact conflicts
and go to sleep.  When those backends go to release their locks, they
will notice that their locks have been moved to the main lock table
and will release them there, eventually waking up backend 4 to go do
his thing.
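
The release side then has to look in both places; roughly (again just
a sketch, with made-up helper names):

    /* Sketch of the release path; helper names are illustrative. */
    void
    ReleaseLock(const LOCKTAG *locktag, LOCKMODE lockmode)
    {
        /* Common case: the lock is still in our backend-private
         * fast-path array, so we can simply clear it there. */
        if (FastPathClearLocalLock(locktag, lockmode))
            return;

        /* Otherwise some strong locker (backend 4 above) has already
         * moved it into the main lock table; release it there instead,
         * which is also what ends up waking any waiters. */
        ReleaseLockFromMainTable(locktag, lockmode);
    }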

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

