Re: foreign key locks, 2nd attempt - Mailing list pgsql-hackers
| From | Noah Misch |
|---|---|
| Subject | Re: foreign key locks, 2nd attempt |
| Date | |
| Msg-id | 20111214152129.GA28988@tornado.leadboat.com |
| In response to | Re: foreign key locks, 2nd attempt (Alvaro Herrera <alvherre@commandprompt.com>) |
| Responses | Re: foreign key locks, 2nd attempt |
| List | pgsql-hackers |
On Tue, Dec 13, 2011 at 06:36:21PM -0300, Alvaro Herrera wrote:
> Excerpts from Noah Misch's message of dom dic 04 09:20:27 -0300 2011:
> 
> > > @@ -2725,11 +2884,20 @@ l2:
> > >  	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_COMMITTED |
> > >  								   HEAP_XMAX_INVALID |
> > >  								   HEAP_XMAX_IS_MULTI |
> > > -								   HEAP_IS_LOCKED |
> > > +								   HEAP_LOCK_BITS |
> > >  								   HEAP_MOVED);
> > > +	oldtup.t_data->t_infomask2 &= ~HEAP_UPDATE_KEY_INTACT;
> > >  	HeapTupleClearHotUpdated(&oldtup);
> > >  	/* ... and store info about transaction updating this tuple */
> > > -	HeapTupleHeaderSetXmax(oldtup.t_data, xid);
> > > +	if (TransactionIdIsValid(keep_xmax_old))
> > > +	{
> > > +		HeapTupleHeaderSetXmax(oldtup.t_data, keep_xmax_old);
> > > +		oldtup.t_data->t_infomask |= keep_xmax_old_infomask;
> > > +	}
> > > +	else
> > > +		HeapTupleHeaderSetXmax(oldtup.t_data, xid);
> > > +	if (key_intact)
> > > +		oldtup.t_data->t_infomask2 |= HEAP_UPDATE_KEY_INTACT;
> > >  	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
> > >  	/* temporarily make it look not-updated */
> > >  	oldtup.t_data->t_ctid = oldtup.t_self;
> > 
> > Shortly after this, we release the content lock and go off toasting the
> > tuple and finding free space. When we come back, could the old tuple
> > have accumulated additional KEY SHARE locks that we need to re-copy?
> 
> Yeah, I've been wondering about this: do we have a problem already with
> exclusion constraints? I mean, if a concurrent inserter doesn't see the
> tuple that we've marked here as deleted while we toast it, it could
> result in a violated constraint, right? I haven't built a test case to
> prove it.

Does the enforcement code for exclusion constraints differ significantly from
the ordinary unique constraint code? If not, I'd expect the concurrent
inserter to treat the tuple precisely like an uncommitted delete, in which
case it will wait for the deleter.

> I settled on this:
> 
> 	/*
> 	 * If any subtransaction of the current top transaction already holds a
> 	 * lock as strong or stronger than what we're requesting, we
> 	 * effectively hold the desired lock already. We *must* succeed
> 	 * without trying to take the tuple lock, else we will deadlock against
> 	 * anyone wanting to acquire a stronger lock.
> 	 */
> 
> Now, I can't see the reason that we didn't previously consider locks "as
> strong as what we're requesting" ... but surely it's the same case?

I think it does degenerate to the same case. When we hold an exclusive lock
in master, HeapTupleSatisfiesUpdate() will return HeapTupleMayBeUpdated. So,
we can only get here while holding a mere share lock.

> > In both HEAP_XMAX_MULTI conditional blocks, you do not set HEAP_XMAX_INVALID
> > for an aborted updater. What is the new meaning of HEAP_XMAX_INVALID for
> > multixacts? What implications would arise if we instead had it mean that
> > the updating xid is aborted? That would allow us to get the mid-term
> > performance benefit of the hint bit when the updating xid spills into a
> > multixact, and it would reduce code duplication in this function.
> 
> Well, HEAP_XMAX_INVALID means the Xmax is not valid, period. If there's
> a multi whose updater is aborted, there's still a multi that needs to be
> checked in various places, so we cannot set that bit.

Ah, yes. Perhaps a better question: would changing HEAP_XMAX_INVALID to
HEAP_UPDATER_INVALID pay off? That would help HeapTupleSatisfiesMVCC() at
the expense of HeapTupleSatisfiesUpdate(), probably along with other
consequences I haven't contemplated adequately.
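To spell out in isolation what the hunk quoted at the top does to the old
tuple, here is a minimal standalone sketch. The bit values, the
FakeTupleHeader struct, and mark_old_tuple_updated() are simplified
stand-ins rather than the real htup.h/heapam definitions; keep_xmax_old,
keep_xmax_old_infomask, and key_intact play the roles they have in the patch.

```c
/*
 * Minimal standalone sketch of the xmax/infomask manipulation in the
 * heap_update() hunk quoted above.  Bit values and struct layout are
 * illustrative stand-ins only, not the real htup.h definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HEAP_XMAX_COMMITTED		0x0400	/* hypothetical bit values */
#define HEAP_XMAX_INVALID		0x0800
#define HEAP_XMAX_IS_MULTI		0x1000
#define HEAP_LOCK_BITS			0x00d0	/* all tuple-lock flag bits */
#define HEAP_MOVED				0xc000
#define HEAP_UPDATE_KEY_INTACT	0x2000	/* infomask2 bit from the patch */

typedef uint32_t TransactionId;
#define InvalidTransactionId		((TransactionId) 0)
#define TransactionIdIsValid(xid)	((xid) != InvalidTransactionId)

typedef struct
{
	TransactionId	xmax;
	uint16_t		infomask;
	uint16_t		infomask2;
} FakeTupleHeader;

static void
mark_old_tuple_updated(FakeTupleHeader *tup,
					   TransactionId xid,
					   TransactionId keep_xmax_old,
					   uint16_t keep_xmax_old_infomask,
					   bool key_intact)
{
	/* wipe stale xmax status, lock, and MOVED bits */
	tup->infomask &= ~(HEAP_XMAX_COMMITTED |
					   HEAP_XMAX_INVALID |
					   HEAP_XMAX_IS_MULTI |
					   HEAP_LOCK_BITS |
					   HEAP_MOVED);
	tup->infomask2 &= ~HEAP_UPDATE_KEY_INTACT;

	if (TransactionIdIsValid(keep_xmax_old))
	{
		/* carry surviving locker state forward as the new xmax */
		tup->xmax = keep_xmax_old;
		tup->infomask |= keep_xmax_old_infomask;
	}
	else
	{
		/* plain case: the updater's xid becomes the xmax */
		tup->xmax = xid;
	}

	if (key_intact)
		tup->infomask2 |= HEAP_UPDATE_KEY_INTACT;
}

int
main(void)
{
	FakeTupleHeader tup = {0, HEAP_XMAX_INVALID, 0};

	/* update by xid 100, no surviving lockers, key columns unchanged */
	mark_old_tuple_updated(&tup, 100, InvalidTransactionId, 0, true);
	printf("xmax=%u infomask=0x%04x infomask2=0x%04x\n",
		   (unsigned) tup.xmax, (unsigned) tup.infomask,
		   (unsigned) tup.infomask2);
	return 0;
}
```

The keep_xmax_old branch is the part that differs from master: any surviving
locker state is copied into the new xmax rather than being replaced outright
by the updater's xid.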