Thread: Re: referential Integrity and SHARE locks
Oops, forgot to include pgsql-hackers when I responded to this the first time.

On Tue, 2007-06-02 at 20:53 -0500, Tom Lane wrote:
> Marc Munro <marc@bloodnok.com> writes:
> > The RI triggers currently fire when a record is updated. Under my
> > proposal they would fire in the same way but before the record is locked
> > rather than after. Or am I missing your point?
>
> IOW, some other transaction could update or delete the tuple meanwhile?
> Doesn't seem very promising.

That other transaction, T1, would have run the same RI triggers and so would
have the same parent records locked. The blocked transaction, T2, once T1 has
committed, would fail. I don't see this as being much different from the
current case, where T1 locks and deletes or updates a row, and T2 then tries
to manipulate the same row. In both cases, locks manage the race for the row,
and MVCC ensures that T2 fails.

--
Marc
I am going to restate my earlier proposal, to clarify it and in the hope of
stimulating more discussion.

One of the causes of deadlocks in Postgres is that its referential integrity
triggers can take locks in inconsistent orders. Generally a child record will
be locked before its parent, but not in all cases. My proposal modifies the
order in which locks are taken by referential integrity triggers, so that
parents are always locked before their children.

The proposal is that referential integrity triggers should fire before
locking the tuple from which they are triggered. I guess a new sort of
trigger is required for this, a before-lock trigger.

If this is a dumb idea, please tell me why. If it would cause more problems
than it solves, ditto. If it would be difficult to implement, let's discuss
and try to find solutions.

--
Marc
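A minimal sketch of the lock ordering under discussion, using hypothetical
tables p and c; the SELECT ... FOR SHARE is only an approximation of what the
RI trigger does internally:

    CREATE TABLE p (id integer PRIMARY KEY);
    CREATE TABLE c (id integer PRIMARY KEY,
                    parent_id integer NOT NULL REFERENCES p (id));

    BEGIN;
    -- Current behaviour: the child row is locked first ...
    UPDATE c SET parent_id = 2 WHERE id = 1;
    -- ... and only then does the RI trigger take a share lock on the
    -- referenced parent, roughly equivalent to:
    --   SELECT 1 FROM p WHERE id = 2 FOR SHARE;
    COMMIT;

    -- The proposal: acquire the share lock on p(2) *before* locking c(1),
    -- so that parents are always locked before their children.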
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Csaba Nagy
Date:
On Thu, 2007-02-08 at 17:47, Marc Munro wrote:
> [snip]
> One of the causes of deadlocks in Postgres is that its referential
> integrity triggers can take locks in inconsistent orders. Generally a
> child record will be locked before its parent, but not in all cases.
[snip]

The problem is that eliminating the deadlock is still not the complete
cake... the interlocking still remains, possibly leading to degraded
performance on high contention on very common parent rows. The real solution
would be when an update on the parent table's non-referenced fields is not
interlocking at all with updates of the child rows... and I think there were
some proposals to do that.

In fact one possibility to avoid this problem is vertical partitioning, i.e.
separating the non-key columns in a parallel table and updating them there.
However this is a choice only when you know it beforehand and you're not
inheriting the schema from other DBs...

Cheers,
Csaba.
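A minimal sketch of the vertical-partitioning workaround Csaba mentions, with
assumed table names: the frequently updated non-key columns live in a
parallel table, so updating them never touches the row on which the RI share
locks land:

    CREATE TABLE parent (
        id integer PRIMARY KEY
    );

    -- Parallel table holding the parent's non-key, frequently updated columns.
    CREATE TABLE parent_data (
        parent_id integer PRIMARY KEY REFERENCES parent (id),
        description text,
        updated_at timestamp
    );

    CREATE TABLE child (
        id integer PRIMARY KEY,
        parent_id integer NOT NULL REFERENCES parent (id)
    );

    -- Child inserts/updates share-lock rows in "parent"; updates to the
    -- parent's payload go to "parent_data" instead, so the two no longer
    -- contend for the same row.
    UPDATE parent_data SET description = 'new text' WHERE parent_id = 1;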
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Marc Munro
Date:
On Thu, 2007-08-02 at 18:06 +0100, Csaba Nagy wrote:
> The problem is that eliminating the deadlock is still not the complete
> cake... the interlocking still remains, possibly leading to degraded
> performance on high contention on very common parent rows. The real
> solution would be when an update on the parent table's non-referenced
> fields is not interlocking at all with updates of the child rows... and
> I think there were some proposals to do that.

Agreed. There are two issues here, unnecessary blocking and deadlock. These
can be tackled separately. My proposal deals only with the deadlock issue.
Even if contention is reduced, for instance by implementing column-level
locking, there will still be the potential for deadlock arising from
inconsistent ordering of locks.

I continue to stand by my proposal.

--
Marc
On Thu, 8 Feb 2007, Marc Munro wrote:
> Oops, forgot to include pgsql-hackers when I responded to this the first
> time.
>
> On Tue, 2007-06-02 at 20:53 -0500, Tom Lane wrote:
> > Marc Munro <marc@bloodnok.com> writes:
> > > The RI triggers currently fire when a record is updated. Under my
> > > proposal they would fire in the same way but before the record is
> > > locked rather than after. Or am I missing your point?
> >
> > IOW, some other transaction could update or delete the tuple meanwhile?
> > Doesn't seem very promising.
>
> That other transaction, T1, would have run the same RI triggers and so
> would have the same parent records locked.

That's not true in the case of delete, since the referencing table triggers
are on insert and update. Second, the parent record locks are not exclusive
which means that both can be granted, so I don't see how this stops the
second from continuing before the first.
On Thu, 2007-08-02 at 10:06 -0800, Stephan Szabo wrote:
> On Thu, 8 Feb 2007, Marc Munro wrote:
. . .
> >
> > That other transaction, T1, would have run the same RI triggers and so
> > would have the same parent records locked.
>
> That's not true in the case of delete, since the referencing table
> triggers are on insert and update. . . .

Let me see if I have this scenario right:

Transaction T1 updates child record C1, with RI causing the parent P1 to be
locked before the child.

In the meantime transaction T2, successfully deletes C1 as it has not yet
been locked.

(Please tell me if I have misunderstood what you are saying)

Yes in this case, T1 must abort because the record it was going to update has
disappeared from underneath it. I don't see how this is significantly
different from the same race for the record if the table had no RI
constraints. The only difference that I can see, is that T1 now has some
locks that it must relinquish as the transaction aborts.

> . . . Second, the parent record locks are not
> exclusive which means that both can be granted, so I don't see how this
> stops the second from continuing before the first.

I don't think this does stop the second from continuing before the first.
What will stop it, is the eventual lock that is taken on the child
(triggering) record. I am not proposing reducing the number of locks taken,
but rather changing the order in which the locks are taken.

<concerned frown> What am I missing? </concerned frown>

--
Marc
Marc Munro <marc@bloodnok.com> writes:
> Yes in this case, T1 must abort because the record it was going to
> update has disappeared from underneath it. I don't see how this is
> significantly different from the same race for the record if the table
> had no RI constraints. The only difference that I can see, is that T1
> now has some locks that it must relinquish as the transaction aborts.

No, the difference is there would have been no error at all before; if the
record were deleted before T1 got to it then it wouldn't have attempted to
update it. I really don't think you can make it work to perform updates or
deletes on a record you have not yet locked.

            regards, tom lane
On Thu, 2007-08-02 at 14:33 -0500, Tom Lane wrote:
> Marc Munro <marc@bloodnok.com> writes:
> > Yes in this case, T1 must abort because the record it was going to
> > update has disappeared from underneath it. I don't see how this is
> > significantly different from the same race for the record if the table
> > had no RI constraints. The only difference that I can see, is that T1
> > now has some locks that it must relinquish as the transaction aborts.
>
> No, the difference is there would have been no error at all before;
> if the record were deleted before T1 got to it then it wouldn't have
> attempted to update it. I really don't think you can make it work
> to perform updates or deletes on a record you have not yet locked.

The record would be locked before the update or delete is attempted, however
it would not be locked until the referential integrity constraints have
succeeded in acquiring their locks.

It is becoming clear to me that I am missing something but I still don't know
what it is. If anyone can see it and explain it I'd really appreciate it.

--
Marc
On Thu, 8 Feb 2007, Marc Munro wrote:
> On Thu, 2007-08-02 at 10:06 -0800, Stephan Szabo wrote:
> > On Thu, 8 Feb 2007, Marc Munro wrote:
> . . .
> > >
> > > That other transaction, T1, would have run the same RI triggers and so
> > > would have the same parent records locked.
> >
> > That's not true in the case of delete, since the referencing table
> > triggers are on insert and update. . . .
>
> Let me see if I have this scenario right:
>
> Transaction T1 updates child record C1, with RI causing the parent P1 to
> be locked before the child.
>
> In the meantime transaction T2, successfully deletes C1 as it has not
> yet been locked.
>
> (Please tell me if I have misunderstood what you are saying)
>
> Yes in this case, T1 must abort because the record it was going to
> update has disappeared from underneath it. I don't see how this is
> significantly different from the same race for the record if the table
> had no RI constraints. The only difference that I can see, is that T1
> now has some locks that it must relinquish as the transaction aborts.
>
> > . . . Second, the parent record locks are not
> > exclusive which means that both can be granted, so I don't see how this
> > stops the second from continuing before the first.
>
> I don't think this does stop the second from continuing before the
> first. What will stop it, is the eventual lock that is taken on the
> child (triggering) record.

But at that point, you've already had to compose the new row in order to call
the trigger for the ri check, right? So, one of those new rows will be out of
date by the time it actually gets the lock. Presumably that means that you
need to recalculate the new row, but you've already done a check and gotten a
lock based on the old new row.

Also, another big problem is the fact that SQL requires that the action
already have happened before the check in cases where the constraint
references the same table. The row being updated or inserted might reference
a row that will be updated or inserted by a later action of the same
statement.
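A small illustration of Stephan's last point, using a hypothetical
self-referencing table: with an ordinary (non-deferred) FK the check still
runs after the statement's actions, because an earlier action of the same
statement may create the row being referenced:

    CREATE TABLE node (
        id integer PRIMARY KEY,
        parent_id integer REFERENCES node (id)   -- self-referencing FK
    );

    -- The first row references the second row, which is inserted by a later
    -- action of the same statement. The RI check therefore cannot run (or
    -- take its locks) before the statement's actions have happened.
    INSERT INTO node (id, parent_id) VALUES (1, 2), (2, NULL);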
On Thu, 2007-08-02 at 12:24 -0800, Stephan Szabo wrote:
> On Thu, 8 Feb 2007, Marc Munro wrote:
>
> > I don't think this does stop the second from continuing before the
> > first. What will stop it, is the eventual lock that is taken on the
> > child (triggering) record.
>
> But at that point, you've already had to compose the new row in order to
> call the trigger for the ri check, right? So, one of those new rows will
> be out of date by the time it actually gets the lock. Presumably that
> means that you need to recalculate the new row, but you've already done a
> check and gotten a lock based on the old new row.

Yes. That is tricky. For my proposed scheme to work, I guess we'd have to be
able to drop those locks which were just acquired by the RI triggers. Not too
simple, I guess.

> Also, another big problem is the fact that SQL requires that the action
> already have happened before the check in cases where the constraint
> references the same table. The row being updated or inserted might
> reference a row that will be updated or inserted by a later action of the
> same statement.

Hmmm. That does seem to be the final nail in the coffin. Consider the
proposal withdrawn, and thanks for explaining it all to me.

--
Marc
On 2/8/2007 2:46 PM, Marc Munro wrote:
> On Thu, 2007-08-02 at 14:33 -0500, Tom Lane wrote:
>> Marc Munro <marc@bloodnok.com> writes:
>> > Yes in this case, T1 must abort because the record it was going to
>> > update has disappeared from underneath it. I don't see how this is
>> > significantly different from the same race for the record if the table
>> > had no RI constraints. The only difference that I can see, is that T1
>> > now has some locks that it must relinquish as the transaction aborts.
>>
>> No, the difference is there would have been no error at all before;
>> if the record were deleted before T1 got to it then it wouldn't have
>> attempted to update it. I really don't think you can make it work
>> to perform updates or deletes on a record you have not yet locked.
>
> The record would be locked before the update or delete is attempted,
> however it would not be locked until the referential integrity
> constraints have succeeded in acquiring their locks.
>
> It is becoming clear to me that I am missing something but I still don't
> know what it is. If anyone can see it and explain it I'd really
> appreciate it.

I think you are missing the fact that the exclusive row lock on UPDATE is
taken before any triggers are fired at all, even BEFORE ROW triggers. This is
necessary in order to prevent the row being updated or removed concurrently
while the triggers are executing. Since BEFORE ROW triggers can modify the
content of the row (including the foreign key), the RI check and lock of the
referenced row cannot happen before other BR triggers are completed.

In order to make your idea fly, the RI check trigger on INSERT or UPDATE
would have to be fired before taking the row lock considering the NEW values
for referencing columns as they are thus far. Since the row isn't locked at
this time, it can change or disappear while the RI trigger is executing, so
the check and lock has to be redone later with the actual row that got locked
and after all BR triggers are done with it.


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
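A small sketch (hypothetical names, reusing the p and c tables from earlier)
of the point Jan makes about BEFORE ROW triggers: because such a trigger may
rewrite the foreign key, the parent row that needs the RI share lock is not
known until all BR triggers have run:

    CREATE FUNCTION c_redirect() RETURNS trigger AS $$
    BEGIN
        -- A BEFORE ROW trigger is free to change the referencing column,
        -- so the parent row that must be share-locked cannot be determined
        -- from the row as it looked before the BR triggers ran.
        NEW.parent_id := 2;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER c_redirect BEFORE UPDATE ON c
        FOR EACH ROW EXECUTE PROCEDURE c_redirect();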
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
"Jim C. Nasby"
Date:
On Thu, Feb 08, 2007 at 08:47:42AM -0800, Marc Munro wrote:
> One of the causes of deadlocks in Postgres is that its referential
> integrity triggers can take locks in inconsistent orders. Generally a
> child record will be locked before its parent, but not in all cases.

Where would PostgreSQL lock the parent before the child? AFAIK the behavior
should be consistent...
--
Jim Nasby                      jim@nasby.net
EnterpriseDB                   http://enterprisedb.com
512.569.9461 (cell)
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Marc Munro
Date:
On Sun, 2007-11-02 at 12:21 -0600, Jim C. Nasby wrote:
> On Thu, Feb 08, 2007 at 08:47:42AM -0800, Marc Munro wrote:
> > One of the causes of deadlocks in Postgres is that its referential
> > integrity triggers can take locks in inconsistent orders. Generally a
> > child record will be locked before its parent, but not in all cases.
>
> Where would PostgreSQL lock the parent before the child? AFAIK the
> behavior should be consistent...

Consider a table C containing 2 child records C1 and C2, of parent P. If
transaction T1 updates C1 and C2, the locking order of the records will be
C1, P, C2. Another transaction, T2, that attempts to update only C2, will
lock the records in order C2, P.

The locks on C2 and P are taken in different orders by the two transactions,
leading to the possibility of deadlock.

--
Marc
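A rough sketch of that interleaving with hypothetical tables p and c, where
both child rows reference p(1), and assuming (as in the discussion above)
that each child update fires the RI check and so share-locks the parent:

    -- T1
    BEGIN;
    UPDATE c SET parent_id = 1 WHERE id = 1;  -- locks C1, then share-locks P
    UPDATE c SET parent_id = 1 WHERE id = 2;  -- locks C2  (order: C1, P, C2)

    -- T2, concurrently
    BEGIN;
    UPDATE c SET parent_id = 1 WHERE id = 2;  -- locks C2, then share-locks P
                                              --           (order: C2, P)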
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Tom Lane
Date:
Marc Munro <marc@bloodnok.com> writes:
> Consider a table C containing 2 child records C1 and C2, of parent P.
> If transaction T1 updates C1 and C2, the locking order of the records
> will be C1, P, C2. Another transaction, T2, that attempts to update
> only C2, will lock the records in order C2, P.
>
> The locks on C2 and P are taken in different orders by the two
> transactions, leading to the possibility of deadlock.

But the lock on P is shared, hence no deadlock.

            regards, tom lane
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Marc Munro
Date:
On Mon, 2007-12-02 at 00:10 -0500, Tom Lane wrote:
> Marc Munro <marc@bloodnok.com> writes:
> > Consider a table C containing 2 child records C1 and C2, of parent P.
> > If transaction T1 updates C1 and C2, the locking order of the records
> > will be C1, P, C2. Another transaction, T2, that attempts to update
> > only C2, will lock the records in order C2, P.
>
> > The locks on C2 and P are taken in different orders by the two
> > transactions, leading to the possibility of deadlock.
>
> But the lock on P is shared, hence no deadlock.

Doh! Yes, you are right. It is not that simple.

For deadlock to occur, we need a transaction that takes an exclusive lock on
P as well as on one of the children. Let us replace T2 with a new
transaction, T3, which is going to update P and only one of its children.

If T3 is going to update P and C1 without the possibility of deadlock against
T1, then it must take out the locks in the order C1, P. If, on the other
hand, it is going to update P and C2, then the locks must be taken in the
order P, C2.

This means that there is no single strategy we can apply to T3 that will
guarantee to avoid deadlocks with transactions that update only C (ie
transactions, which to a developer's point of view do nothing to P, and
so should be unable to deadlock with T3).

From an application developer's standpoint there are few options, none of
them ideal:

1) Insist on a locking policy that requires updates to first lock their
parent records.

This is horrible for so many reasons. It should be unnecessary; it causes
exclusive locking on parent records, thereby eliminating the gains made by
introducing row share locks in 8.1; it is onerous on the developers; it is
error-prone; etc

2) Remove FK constraints to eliminate the possibility of RI-triggered
deadlocks.

Ugh.

3) Encapsulate all transactions in some form of retry mechanism that traps
deadlocks and retries those transactions.

This may not be practicable, and incurs all of the overhead of encountering
and trapping deadlocks in the first place. Also, as each deadlock occurs, a
number of locks will be left active before deadlock detection kicks in,
increasing the window for further deadlocks. On a busy system, the first
deadlock may well trigger a cascade of further deadlocks.

--
Marc
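A sketch of one such T1/T3 deadlock with the same hypothetical tables, when
T3 happens to take its locks in the order P, C1 (the column name payload is
assumed, and again we assume the child update fires the RI check):

    -- T3:
    BEGIN;
    UPDATE p SET payload = 'x' WHERE id = 1;   -- exclusive lock on P

    -- T1:
    BEGIN;
    UPDATE c SET parent_id = 1 WHERE id = 1;   -- locks C1, then waits for a
                                               -- share lock on P, blocked by T3

    -- T3:
    UPDATE c SET parent_id = 1 WHERE id = 1;   -- waits for C1, held by T1:
                                               -- deadlock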
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Tom Lane
Date:
Marc Munro <marc@bloodnok.com> writes:
> From an application developer's standpoint there are few options, none
> of them ideal:

How about

4) Make all the FK constraints deferred, so that they are only checked
at end of transaction. Then the locking order of transactions that only
modify C is always C1, C2, ..., P.

            regards, tom lane
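A sketch of what suggestion 4 looks like in SQL, with assumed constraint and
table names: deferring the constraint moves the RI check, and hence its share
lock on the parent, to commit time, after all the child-row locks have been
taken:

    -- Declare (or re-create) the FK as deferrable and initially deferred:
    ALTER TABLE c
        ADD CONSTRAINT c_parent_fk
        FOREIGN KEY (parent_id) REFERENCES p (id)
        DEFERRABLE INITIALLY DEFERRED;

    BEGIN;
    UPDATE c SET parent_id = 1 WHERE id = 1;  -- locks C1
    UPDATE c SET parent_id = 1 WHERE id = 2;  -- locks C2
    COMMIT;                                   -- RI check and share lock on P
                                              -- happen here (order: C1, C2, P)

    -- If the constraint is declared DEFERRABLE but not INITIALLY DEFERRED,
    -- individual transactions can opt in with:
    --   SET CONSTRAINTS c_parent_fk DEFERRED;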
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Marc Munro
Date:
On Tue, 2007-13-02 at 11:38 -0500, Tom Lane wrote:
> Marc Munro <marc@bloodnok.com> writes:
> > From an application developer's standpoint there are few options, none
> > of them ideal:
>
> How about
>
> 4) Make all the FK constraints deferred, so that they are only checked
> at end of transaction. Then the locking order of transactions that only
> modify C is always C1, C2, ..., P.

Excellent suggestion. Thank you.

--
Marc
Re: Reducing likelihood of deadlocks (was referential Integrity and SHARE locks)
From
Alex Hayward
Date:
On Tue, 13 Feb 2007, Marc Munro wrote:

> On Mon, 2007-12-02 at 00:10 -0500, Tom Lane wrote:
> > Marc Munro <marc@bloodnok.com> writes:
> > > Consider a table C containing 2 child records C1 and C2, of parent P.
> > > If transaction T1 updates C1 and C2, the locking order of the records
> > > will be C1, P, C2. Another transaction, T2, that attempts to update
> > > only C2, will lock the records in order C2, P.
> >
> > > The locks on C2 and P are taken in different orders by the two
> > > transactions, leading to the possibility of deadlock.
> >
> > But the lock on P is shared, hence no deadlock.
>
> Doh! Yes, you are right. It is not that simple.
>
> For deadlock to occur, we need a transaction that takes an exclusive
> lock on P as well as on one of the children. Let us replace T2 with a
> new transaction, T3, which is going to update P and only one of its
> children.
>
> If T3 is going to update P and C1 without the possibility of deadlock
> against T1, then it must take out the locks in the order C1, P. If, on
> the other hand, it is going to update P and C2, then the locks must be
> taken in the order P, C2.
>
> This means that there is no single strategy we can apply to T3 that will
> guarantee to avoid deadlocks with transactions that update only C (ie
> transactions, which to a developer's point of view do nothing to P, and
> so should be unable to deadlock with T3).

This scenario would do it, too:

Table X has rows representing an object of some kind. These objects contain
other objects, which are represented by rows in table Y. Suppose X stores a
count of the Y objects it contains in a particular status (because your
application needs to get this quickly) and suppose the count is updated by a
trigger. The Y objects hold references to the containing X, checked by FK
constraints.

A query which updates the status of one or more Ys can deadlock with another
instance of itself. It first locks a Y row, then shared-locks an X row, then
updates the X row (when the trigger runs). Two transactions could get to the
shared-lock stage simultaneously, then deadlock.

I've come across some a bit like this in my own applications. I'm sure there
are many, many others.

> From an application developer's standpoint there are few options, none
> of them ideal:
>
> 1) Insist on a locking policy that requires updates to first lock their
> parent records.
>
> This is horrible for so many reasons. It should be unnecessary; it
> causes exclusive locking on parent records, thereby eliminating the
> gains made by introducing row share locks in 8.1; it is onerous on the
> developers; it is error-prone; etc

I once tried to define a locking order for rows in a database. It doesn't
work (though this was at a time when FK constraint checking used FOR UPDATE
locks, which, of course, made things much worse). This wasn't specifically
for FK checks, but they were an important cause of deadlocks.

Firstly, you have no idea of the order in which rows locked by a statement
will be locked. UPDATE d SET counter=counter+1 WHERE d.a=1 could deadlock
with UPDATE d SET counter=counter+1 WHERE d.b=1.

Secondly, even if you could, defining a usable locking order across all of
the rows in your database (not just the tables) is nearly impossible. I
suppose you could base it on, say, the order (tablename, id) but you'd have
to go to extreme lengths to follow this.
Imagine having to determine the id of every row you want to update and the ID
and table name of every row they'll lock because of FK constraints and then
sort a big set of 'SELECT .. FOR SHARE' and 'UPDATE' statements. That's a lot
of queries - and a huge variety of useful queries you can't use any more. And
once you've done all that you might find your application has race conditions
- there are sometimes other reasons for performing queries in a certain
order.

> 2) Remove FK constraints to eliminate the possibility of RI-triggered
> deadlocks.
>
> Ugh.

Deadlocks aren't the only problem FK constraints' locks are going to cause
you. It's quite possible you have, somewhere, a small number of rows
referenced via FKs by a huge number of rows. Think about the amount of
locking (not just locks on rows, but internal locks on bits of cache) and
cache coherency logic going on in a production SMP machine.

It'll depend on your load and schema, of course, but I've found the
constraints to be mostly impractical in my production systems. Some I can
keep, but only if I know that the checks aren't going to be triggered too
often.

> 3) Encapsulate all transactions in some form of retry mechanism that
> traps deadlocks and retries those transactions.
>
> This may not be practicable, and incurs all of the overhead of
> encountering and trapping deadlocks in the first place. Also, as each
> deadlock occurs, a number of locks will be left active before deadlock
> detection kicks in, increasing the window for further deadlocks. On a
> busy system, the first deadlock may well trigger a cascade of further
> deadlocks.

It's essential to do this (actually, I always presumed that not checking for
deadlocks was a typical database-newbie mistake and that everyone knew they
needed to; I don't seem to see it mentioned on here much, though). Unless you
have a very simple dataset and simple queries you are not going to be able to
guarantee you won't get deadlocks, from FK checks or otherwise.

You could check every transaction against every other transaction, I
suppose... but you might not know what those are at compile time, and it'll
be a very painful task that'll have to be repeated for every new query,
constraint or trigger you add. You might not even have control over them.
It's also dependent on the inner workings of the database and bits of schema
you aren't really meant to know (consider the type of lock taken by a FK
check, the time it's taken, the order in which the FK checks are performed
and the order of triggers generally).
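A minimal sketch of the counter-maintaining schema Alex describes near the
top of this message (all names assumed); per that description, two concurrent
status updates under the same x row can each hold the FK share lock on it and
then deadlock when the trigger tries to take the exclusive lock:

    CREATE TABLE x (
        id integer PRIMARY KEY,
        active_count integer NOT NULL DEFAULT 0   -- maintained by trigger
    );

    CREATE TABLE y (
        id integer PRIMARY KEY,
        x_id integer NOT NULL REFERENCES x (id),  -- FK check share-locks x
        status text NOT NULL
    );

    CREATE FUNCTION y_count_status() RETURNS trigger AS $$
    BEGIN
        -- Exclusive row lock on the parent, taken after the FK check's
        -- share lock on the same row.
        UPDATE x
           SET active_count = (SELECT count(*) FROM y
                                WHERE x_id = NEW.x_id AND status = 'active')
         WHERE id = NEW.x_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER y_count_status AFTER INSERT OR UPDATE ON y
        FOR EACH ROW EXECUTE PROCEDURE y_count_status();

    -- Two concurrent "UPDATE y SET status = ... WHERE ..." statements under
    -- the same x row can deadlock as described above.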
Added to TODO:

* Allow UPDATEs on only non-referential integrity columns not to conflict
  with referential integrity locks

  http://archives.postgresql.org/pgsql-hackers/2007-02/msg00073.php

---------------------------------------------------------------------------

Jan Wieck wrote:
> On 2/8/2007 2:46 PM, Marc Munro wrote:
> > On Thu, 2007-08-02 at 14:33 -0500, Tom Lane wrote:
> >> Marc Munro <marc@bloodnok.com> writes:
> >> > Yes in this case, T1 must abort because the record it was going to
> >> > update has disappeared from underneath it. I don't see how this is
> >> > significantly different from the same race for the record if the table
> >> > had no RI constraints. The only difference that I can see, is that T1
> >> > now has some locks that it must relinquish as the transaction aborts.
> >>
> >> No, the difference is there would have been no error at all before;
> >> if the record were deleted before T1 got to it then it wouldn't have
> >> attempted to update it. I really don't think you can make it work
> >> to perform updates or deletes on a record you have not yet locked.
> >
> > The record would be locked before the update or delete is attempted,
> > however it would not be locked until the referential integrity
> > constraints have succeeded in acquiring their locks.
> >
> > It is becoming clear to me that I am missing something but I still don't
> > know what it is. If anyone can see it and explain it I'd really
> > appreciate it.
>
> I think you are missing the fact that the exclusive row lock on UPDATE
> is taken before any triggers are fired at all, even BEFORE ROW triggers.
> This is necessary in order to prevent the row being updated or removed
> concurrently while the triggers are executing. Since BEFORE ROW triggers
> can modify the content of the row (including the foreign key), the RI
> check and lock of the referenced row cannot happen before other BR
> triggers are completed.
>
> In order to make your idea fly, the RI check trigger on INSERT or UPDATE
> would have to be fired before taking the row lock considering the NEW
> values for referencing columns as they are thus far. Since the row isn't
> locked at this time, it can change or disappear while the RI trigger is
> executing, so the check and lock has to be redone later with the actual
> row that got locked and after all BR triggers are done with it.
>
> Jan

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +