On Fri, 2003-09-05 at 15:29, Jan Wieck wrote:
> However, even with nested transactions and exceptions and all that, your
> problem will not be cleanly solvable. You basically have 2 choices: try
> the INSERT first and, if that fails with a duplicate key, do the UPDATE;
> or try the UPDATE first and, if no rows were hit, do an INSERT.
> Now if 2 concurrent transactions both try the UPDATE, they can both fail
> to find the row and do the INSERT - one of them gets a dupkey error. But
> if you try the INSERT and get a duplicate key, then in the time between
> getting the error and issuing the UPDATE someone else can issue a DELETE -
> the row is gone and your UPDATE will fail.
>
> So you have to go into a loop and try INSERTorUPDATEorINSERT... until
> you either get bored or succeed ... that's not exactly what I call a
> solution.
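(For concreteness, the loop you describe might look roughly like this as a
PL/pgSQL sketch - assuming a hypothetical table db(key, val) and a PL/pgSQL
that supports exception blocks:)

    CREATE FUNCTION merge_db(k INT, v TEXT) RETURNS VOID AS $$
    BEGIN
        LOOP
            -- first try to update an existing row
            UPDATE db SET val = v WHERE key = k;
            IF found THEN
                RETURN;
            END IF;
            -- no row there, so try to insert;
            -- a concurrent insert may beat us to it
            BEGIN
                INSERT INTO db (key, val) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- someone else inserted the key first: loop and retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;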
You're completely right that this doesn't solve all possible problems, but
different solutions are better or worse depending also on the odds of the
problem actually occurring. My code can deal with broken transactions; it's
just unacceptable if they break too often, because that slows down the
system. You must admit that the odds of an insert, an update and a delete
colliding at the same time are much lower than the odds of just 2 concurrent
inserts, whatever the application usage pattern is. In particular, it's
fairly easy to make sure no delete happens while you update: select the row
FOR UPDATE, as in the sketch below. Only the insert is the problem, because
you don't have a row to lock beforehand.
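(Something like this is what I mean for the update path - again a rough
sketch, assuming the same hypothetical db(key, val) table:)

    BEGIN;
    -- lock the row so a concurrent DELETE cannot remove it before our UPDATE
    SELECT val FROM db WHERE key = 1 FOR UPDATE;
    -- if a row came back it stays locked until COMMIT, so the UPDATE is safe
    UPDATE db SET val = 'new value' WHERE key = 1;
    COMMIT;
    -- if no row came back we have to fall through to an INSERT, and that
    -- INSERT can still race with another session's INSERT of the same key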
Cheers,
Csaba.