Re: BUG #8470: 9.3 locking/subtransaction performance regression - Mailing list pgsql-bugs
From: Andres Freund
Subject: Re: BUG #8470: 9.3 locking/subtransaction performance regression
Msg-id: 20131230181151.GG12910@alap2.anarazel.de
In response to: Re: BUG #8470: 9.3 locking/subtransaction performance regression (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses: Re: BUG #8470: 9.3 locking/subtransaction performance regression
List: pgsql-bugs
On 2013-12-23 16:53:00 -0300, Alvaro Herrera wrote:
> +	/*
> +	 * If any subtransaction of the current top transaction already holds
> +	 * a lock on this tuple, (and no one else does,) we must skip sleeping
> +	 * on the xwait; that would raise an assert about sleeping on our own
> +	 * Xid (or sleep indefinitely in a non-asserts-enabled build.)
> +	 *
> +	 * Note we don't need to do anything about this when the Xmax is a
> +	 * multixact, because that code is already prepared to ignore
> +	 * subtransactions of the current top transaction; also, trying to do
> +	 * anything other than sleeping during a delete when someone else is
> +	 * holding a lock on this tuple would be wrong.

I don't really understand why those two issues are discussed in the same
paragraph; what's their relation? Just that a deleting xact shouldn't be
a multi?

> +	 * This must be done before acquiring our tuple lock, to avoid
> +	 * deadlocks with other transaction that are already waiting on the
> +	 * lock we hold.
> +	 */
> +	if (!(tp.t_data->t_infomask & HEAP_XMAX_IS_MULTI) &&
> +		TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tp.t_data)))
> +	{
> +		Assert(HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask));
> +
> +		goto continue_deletion;
> +	}

Man, the control flow in heapam.c isn't getting simpler :/. Could you
rename the label to perform_deletion or such? It's not really
"continuing" the deletion from my POV.

> +	/*
> +	 * If any subtransaction of the current top transaction already holds
> +	 * a lock on this tuple, (and no one else does,) we must skip sleeping
> +	 * on the xwait; that would raise an assert about sleeping on our own
> +	 * Xid (or sleep indefinitely in a non-assertion enabled build.)

I'd reformulate this to note that we'd wait for ourselves, perhaps
noting that that fact would be caught by an assert in assert-enabled
builds. The assert isn't the real reason we cannot do this.
> 	/*
> -	 * We might already hold the desired lock (or stronger), possibly under a
> -	 * different subtransaction of the current top transaction. If so, there
> -	 * is no need to change state or issue a WAL record. We already handled
> -	 * the case where this is true for xmax being a MultiXactId, so now check
> -	 * for cases where it is a plain TransactionId.
> -	 *
> -	 * Note in particular that this covers the case where we already hold
> -	 * exclusive lock on the tuple and the caller only wants key share or
> -	 * share lock. It would certainly not do to give up the exclusive lock.
> -	 */
> -	if (!(old_infomask & (HEAP_XMAX_INVALID |
> -						  HEAP_XMAX_COMMITTED |
> -						  HEAP_XMAX_IS_MULTI)) &&
> -		(mode == LockTupleKeyShare ?
> -		 (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask) ||
> -		  HEAP_XMAX_IS_SHR_LOCKED(old_infomask) ||
> -		  HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) :
> -		 mode == LockTupleShare ?
> -		 (HEAP_XMAX_IS_SHR_LOCKED(old_infomask) ||
> -		  HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) :
> -		 (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask))) &&
> -		TransactionIdIsCurrentTransactionId(xmax))
> -	{
> -		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
> -		/* Probably can't hold tuple lock here, but may as well check */
> -		if (have_tuple_lock)
> -			UnlockTupleTuplock(relation, tid, mode);
> -		return HeapTupleMayBeUpdated;
> -	}

Are you sure transforming this hunk the way you did is functionally
equivalent?

> 		if (!MultiXactIdIsRunning(xmax))
> 		{
> -			if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) ||
> -				TransactionIdDidAbort(MultiXactIdGetUpdateXid(xmax,
> -															  old_infomask)))
> +			if ((HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) ||
> +				 !TransactionIdDidCommit(MultiXactIdGetUpdateXid(xmax,
> +																 old_infomask))))

Heh, that's actually an (independent) bug...

What's the test coverage for these cases? It better be 110%...

Greetings,

Andres Freund