Thread: Concurrency question
Hi,

I'm trying to figure out the best way to handle the following situation. There are two processes, A and B.

A is a daemon process that performs a select and then slowly iterates over the results, performing updates along the way.

It is possible that interactive process B comes along and wants to change the data that A is working with. B should not 1) hang or 2) fail (it's interactive, and in this case the user is always right). It's not optimal, but it would be OK if A failed; indeed, that would be better than if it kept working with the (now incorrect) data that it pulled from the select prior to the user's intervention.

Thoughts? Thank you for any insight you can send my way!

--
David N. Welton - http://www.dedasys.com/davidw/
Linux, Open Source Consulting - http://www.dedasys.com/
On Tue, 2006-03-28 at 14:56 +0200, David Welton wrote:
> There are two processes, A, and B.
>
> A is a daemon process that performs a select, and then slowly iterates
> over the results, performing updates along the way.
>
> It is possible that interactive process B comes along, and wants to
> change the data that A is working with. B should not 1) hang or 2)
> fail (it's interactive, and in this case the user is always right).
> It's not optimal, but it would be ok if A failed - indeed, it would be
> better than if it kept working with the (now incorrect) data that it
> pulled from the select prior to the user's intervention.

A should use a serializable transaction, so it will fail whenever it sees a row updated by B. That way A will fail, as you request.

Try breaking the A query down with LIMIT/OFFSET so that it never holds locks for long. That way B will not wait for long, if at all, and will not fail.

Best Regards,
Simon Riggs
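A minimal sketch of the approach Simon describes, assuming a hypothetical table `queue` with an integer primary key `id`, a `payload` column, and a `processed` flag (all names are illustrative):

```sql
-- Process A: each chunk runs in its own serializable transaction,
-- kept small so any locks it takes are held only briefly.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

SELECT id, payload
  FROM queue
 WHERE processed = false
 ORDER BY id
 LIMIT 100;                      -- one small chunk at a time

-- ... slow per-row work happens here, then:
UPDATE queue
   SET processed = true
 WHERE id IN (/* ids from the chunk above */);
-- If B has meanwhile updated one of these rows, this UPDATE fails
-- with "could not serialize access due to concurrent update"
-- (SQLSTATE 40001), so A never overwrites B's change.

COMMIT;
```

A failed chunk can simply be abandoned and retried on the daemon's next run, which will then see a fresh snapshot of the data.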
> Try breaking down the A query with LIMIT/OFFSET so that it never holds
> locks for long. That way B will not wait for long, if at all, and will
> not fail.

Just as a remark, this will only work if the chunks can be processed in separate transactions. If the whole thing is related and A must be completely wrapped in a single transaction, then the locks placed by the first queries will still be held until the end of the transaction...

Cheers,
Csaba.
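Csaba's caveat, in sketch form (again with a hypothetical `queue` table): the chunking only shortens lock hold times if each chunk commits on its own. With one wrapping transaction, the LIMIT/OFFSET split changes nothing for B.

```sql
-- Helps B: each chunk is its own transaction, so row locks taken
-- by a chunk's updates are released at that chunk's COMMIT.
BEGIN; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- process chunk 1 (SELECT ... LIMIT 100, then updates)
COMMIT;                          -- chunk 1's locks released here
BEGIN; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  -- process chunk 2 (... OFFSET 100)
COMMIT;

-- Does not help B: one transaction around everything means the
-- locks from chunk 1's updates are held until the final COMMIT.
BEGIN;
  -- chunk 1, chunk 2, ..., chunk N
COMMIT;
```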
[ Oops, I missed the reply-to button the first time - sorry for the repeat, Csaba ]

On 3/28/06, Csaba Nagy <nagy@ecircle-ag.com> wrote:
> > Try breaking down the A query with LIMIT/OFFSET so that it never holds
> > locks for long. That way B will not wait for long, if at all, and will
> > not fail.
>
> Just as a remark, this will only work if the chunks can be processed in
> separate transactions. If the whole thing is related and A must be
> completely wrapped in a transaction, then the locks placed by the first
> queries will still hold until the end of the transaction...

The current system we have is plain broken, as it has no transactions for anything, and we occasionally get bad results.

I'm not sure whether it's possible to split A (the long-running, non-interactive part) into multiple pieces. It starts first (if not, there is no problem), and if a user comes along and runs B, that needs to stomp on whatever A happened to be doing. It would probably be best if A just failed and rolled back at that point (it will get run again in a few minutes in any case).

I looked at the concurrency sections of the manual online, but I got the impression that B is the one that would potentially have problems if it tried to write while A was doing its business (reading and writing).

Thank you,

--
David N. Welton - http://www.dedasys.com/davidw/
Linux, Open Source Consulting - http://www.dedasys.com/
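For reference, the behaviour David is describing from the manual looks like this, again with a hypothetical `queue` table: under PostgreSQL's default READ COMMITTED isolation level, B's write blocks on A's uncommitted update rather than failing.

```sql
-- Session A (daemon), default READ COMMITTED:
BEGIN;
UPDATE queue SET payload = 'recomputed' WHERE id = 42;  -- row lock taken
-- ... A keeps iterating, transaction still open ...

-- Session B (interactive user), meanwhile:
UPDATE queue SET payload = 'user value' WHERE id = 42;
-- B blocks here until A commits or rolls back. B will not fail,
-- but it will hang for as long as A's transaction stays open,
-- which is why keeping A's transactions short and chunked, as
-- suggested earlier in the thread, matters.
```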