pgsql: Optimize locking a tuple already locked by another subxact - Mailing list pgsql-committers

From Alvaro Herrera
Subject pgsql: Optimize locking a tuple already locked by another subxact
Date
Msg-id E1Ygc8n-0006e3-DT@gemulon.postgresql.org
List pgsql-committers
Optimize locking a tuple already locked by another subxact

Locking and updating the same tuple repeatedly led to the creation of some
strange multixacts that had several subtransactions of the same parent
transaction holding locks of the same strength.  However, once a subxact of
the current transaction holds a lock of a given strength, it's not necessary
to acquire the same lock again.  This made some coding patterns much slower
than required.
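
For illustration, here is a minimal libpq sketch of the kind of client
pattern affected; the table "t", its "id" column, the connection string and
the loop count are made up for the example, and error handling of the
individual statements is omitted.  The point is only that each savepoint
starts a new subtransaction that re-locks the same row:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* connection parameters and table "t" are hypothetical */
        PGconn     *conn = PQconnectdb("dbname=postgres");
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < 1000; i++)
        {
            char        sql[64];

            /* each SAVEPOINT is a new subxact that re-locks the same row */
            snprintf(sql, sizeof(sql), "SAVEPOINT s%d", i);
            PQclear(PQexec(conn, sql));
            PQclear(PQexec(conn, "SELECT 1 FROM t WHERE id = 1 FOR UPDATE"));
        }
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }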

The fix is twofold.  First we change HeapTupleSatisfiesUpdate to return
HeapTupleBeingUpdated for the case where the current transaction is
already a single-xid locker for the given tuple; it used to return
HeapTupleMayBeUpdated for that case.  The new logic is simpler, and the
change to pgrowlocks is a testament to that: previously we needed to
check for the single-xid locker separately in a very ugly way.  That
test is simpler now.
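
In outline, the locked-only plain-xid branch now behaves roughly as below.
This is a condensed sketch of the new tqual.c behavior, not the verbatim
patch; the surrounding in-progress and hint-bit checks are elided:

    /*
     * Sketch only: xmax is a plain xid that merely locks the tuple.  If it
     * belongs to the current transaction, report BeingUpdated so that
     * callers handle "locked by ourselves" in the same branch as "locked
     * by someone else".
     */
    if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&
        !(tuple->t_infomask & HEAP_XMAX_IS_MULTI))
    {
        if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tuple)))
            return HeapTupleBeingUpdated;   /* previously MayBeUpdated */
    }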

As fallout from the HTSU change, some of its callers need to be amended
so that tuple-locked-by-own-transaction is taken into account in the
BeingUpdated case rather than the MayBeUpdated case.  For many of them
there is no difference; but heap_delete() and heap_update() now check
explicitly and do not grab the tuple lock in that case.
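
Roughly, and again only as a sketch rather than the actual diff, the
adjusted check in heap_delete() looks like the following (heap_update() is
analogous):

    if (result == HeapTupleBeingUpdated && wait)
    {
        TransactionId xwait = HeapTupleHeaderGetRawXmax(tp.t_data);
        uint16      infomask = tp.t_data->t_infomask;

        if (infomask & HEAP_XMAX_IS_MULTI)
        {
            /* multixact locker: sleep on its members, as before */
        }
        else if (!TransactionIdIsCurrentTransactionId(xwait))
        {
            /* somebody else: acquire the tuple lock, then wait for them */
        }
        /* else the locker is (a subxact of) ourselves: no tuple lock, no sleep */
    }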

The HTSU change also means that the routine MultiXactHasRunningRemoteMembers,
introduced in commit 11ac4c73cb895, is no longer necessary and can be
removed; the case that used to require it is now handled naturally as a
result of the changes to heap_delete() and heap_update().

The second part of the fix to the performance issue is to adjust
heap_lock_tuple() to avoid the slowness (a condensed sketch follows the list):

1. Previously we checked for the case that our own transaction already
held a strong enough lock and returned MayBeUpdated, but only in the
multixact case.  Now we do it for the plain Xid case as well, which
saves having to LockTuple.

2. If the current transaction is the only locker of the tuple (but with
a lock not as strong as what we need; otherwise it would have been
caught in the check mentioned above), we can skip sleeping on the
multixact, and instead go straight to create an updated multixact with
the additional lock strength.

3. Most importantly, make sure that both the single-xid-locker and the
multixact-locker optimizations are always applied.  We do this by checking
both in a single place, rather than having them appear in two separate
portions of the routine -- something that is made possible by the
HeapTupleSatisfiesUpdate API change.  Previously we would only check
for the single-xid case when HTSU returned MayBeUpdated, and only
checked for the multixact case when HTSU returned BeingUpdated.  This
was at odds with what HTSU actually returned in one case: if our own
transaction was a locker in a multixact, it returned MayBeUpdated, so the
optimization never applied.  This is what led to the large multixacts in
the first place.
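
Putting the three points together, the check now lives in one place.  The
following is a condensed sketch of the shape of the new heap_lock_tuple()
logic, not the actual diff; details such as copying the xmax state before
releasing the buffer lock are elided:

    if (result == HeapTupleBeingUpdated)
    {
        TransactionId xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
        uint16      infomask = tuple->t_data->t_infomask;

        if (infomask & HEAP_XMAX_IS_MULTI)
        {
            /*
             * Walk the multixact members; if any subxact of our own
             * transaction already holds a lock at least as strong as the
             * one requested, return success at once -- no LockTuple, no
             * sleep (points 1 and 3).
             */
        }
        else if (TransactionIdIsCurrentTransactionId(xwait))
        {
            /*
             * The plain-xid locker is (a subxact of) ourselves.  If the
             * existing lock is strong enough, return success; otherwise
             * skip sleeping and go straight to creating a multixact with
             * the additional lock strength (point 2), since nobody else
             * holds the tuple.
             */
        }
        /* otherwise fall through to the usual wait-for-locker path */
    }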

Per bug report #8470 by Oskari Saarenmaa.

Branch
------
master

Details
-------
http://git.postgresql.org/pg/commitdiff/27846f02c176eebe7e08ce51ed4d52140454e196

Modified Files
--------------
contrib/pgrowlocks/pgrowlocks.c        |    9 +-
src/backend/access/heap/heapam.c       |  440 ++++++++++++++++----------------
src/backend/access/transam/multixact.c |   35 +--
src/backend/utils/time/tqual.c         |   21 +-
src/include/access/multixact.h         |    1 -
5 files changed, 237 insertions(+), 269 deletions(-)

