Thread: Errors inside transactions
I've been using transactions fairly heavily as a means of ensuring that a group of related updates would look atomic to other processes watching the database. For some reason I'd never thought twice about the implications of a transaction block for error recovery. But they're there, and they're nasty: if one command in a transaction fails (a failure in the server's terms, not in the application's logic), then every subsequent command gets ignored until an END TRANSACTION is seen.

I made the mistake of writing some code that did an INSERT it knew might fail (due to key collision in a unique index). The code was prepared to deal with the failure. No problem. Except it was being executed in an application that likes to wrap a transaction block around rather large chunks of processing. Result: hundreds of subsequent commands were ignored, silently.

OK, there were notices getting printed on stderr, but that didn't help the application any. It didn't help the user either in this particular case, because he had iconized the shell window he'd started the application from. He lost several hours' work, and I'm looking at a total redesign of the application logic to get around this.

The reason I say "total redesign" is that I'm afraid to just rewrite the chunk of code that does the one INSERT. Now that I've been bitten once, I know that transactions are horribly dangerous: get an error that you *think* you've recovered from, and you're still screwed if you forgot to issue an END or ABORT TRANSACTION. Your code won't recover until you shut down and restart your app. So I'm going to have to find a way to guarantee that all possible paths of execution will do the END TRANSACTION, and that looks painful.

Is there, or could there be, such a thing as a "CONTINUE TRANSACTION" command to clear the "aborted transaction" status after an expected error?
Without that, either I can't use transactions, or I can never dare use a command that might return a failure message.

Also, I feel that it's a mistake to treat "queries ignored until END" as a mere informational message. It ought to be transmitted to the frontend as an *error condition*. Your command did not execute, and pretending that there's nothing wrong is not helpful, especially not to application code that's not designed to pay close attention to NOTICE messages.

			regards, tom lane
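[Editor's sketch of the failure pattern above, in Python. The schema and names are invented for illustration, and sqlite3 is only a stand-in: sqlite aborts just the failed statement, so the script below actually succeeds, whereas the PostgreSQL behavior being complained about would silently ignore every statement after the failed INSERT until END TRANSACTION.]

```python
import sqlite3
import tempfile, os

# Hypothetical schema standing in for the shared database in the post.
db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE tags (name TEXT PRIMARY KEY)")
conn.execute("INSERT INTO tags VALUES ('existing')")
conn.commit()

# The risky shape: a large wrapper transaction, inside which one INSERT
# is *expected* to fail sometimes (duplicate key), and the code catches it.
conn.execute("BEGIN")
try:
    conn.execute("INSERT INTO tags VALUES ('existing')")  # duplicate key
except sqlite3.IntegrityError:
    pass  # "the code was prepared to deal with the failure"

# sqlite carries on; the PostgreSQL described in this thread would
# silently ignore everything from here to END TRANSACTION.
conn.execute("INSERT INTO tags VALUES ('new')")
conn.commit()

count = conn.execute("SELECT count(*) FROM tags").fetchall()[0][0]
print(count)  # 2 under sqlite; the thread's complaint is that the
              # second INSERT would be lost under PostgreSQL
```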
> I made the mistake of writing some code that did an INSERT it knew
> might fail (due to key collision in a unique index). The code was
> prepared to deal with the failure. No problem. Except it was being
> executed in an application that likes to wrap a transaction block
> around rather large chunks of processing. Result: hundreds of
> subsequent commands were ignored, silently.
> ... I'm afraid to just rewrite
> the chunk of code that does the one INSERT. Now that I've been bit
> once, I know that transactions are horribly dangerous: get an error
> that you *think* you've recovered from, and you're still screwed if
> you forgot to issue an END or ABORT TRANSACTION. Your code won't
> recover until you shut down and restart your app. So I'm going to
> have to find a way to guarantee that all possible paths of execution
> will do the END TRANSACTION, and that looks painful.
>
> Is there, or could there be, such a thing as a "CONTINUE TRANSACTION"
> command to clear the "aborted transaction" status after an expected
> error? Without that, either I can't use transactions, or I can never
> dare use a command that might return a failure message.

Why not have your error handler issue a new BEGIN statement? Or an END/BEGIN pair if that is what is needed? afaik that wouldn't be any different than the proposed "CONTINUE TRANSACTION" statement.

Or do you really want to commit the previous statements in the transaction rather than throw them away? If so, you didn't really need the transaction block there anyway, since you are willing to accept a partial result. But I'm vaguely recalling that my Ingres apps used to be able to choose between COMMIT and ROLLBACK when an error occurred in a transaction. I'll bet that is the behavior you are asking for. Don't know whether the Postgres backend knows what the state prior to the current command was.

> Also, I feel that it's a mistake to treat "queries ignored until END"
> as a mere informational message. It ought to be transmitted to the
> frontend as an *error condition*. Your command did not execute, and
> pretending that there's not something wrong is not helpful, especially
> not to application code that's not designed to pay close attention to
> NOTICE messages.

That sounds like a good idea to me too. Are there any downsides to it?

			- Tom
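[Editor's sketch of the END/BEGIN-pair error handler suggested above, in Python with sqlite3 standing in for a raw PostgreSQL connection; the schema and helper name are invented. Note it accepts Lockhart's caveat: the ROLLBACK half throws away any earlier uncommitted work in the transaction.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (name TEXT PRIMARY KEY)")
conn.execute("INSERT INTO tags VALUES ('existing')")
conn.commit()

def insert_tag(conn, name):
    """Insert a tag; on the *expected* duplicate-key failure, reset the
    transaction so that later statements are not ignored."""
    try:
        conn.execute("INSERT INTO tags VALUES (?)", (name,))
    except sqlite3.IntegrityError:
        # The END (really ABORT) half of the pair. Beware: this also
        # discards any earlier statements pending in the transaction.
        conn.rollback()
        # sqlite3 re-begins implicitly on the next statement; over a raw
        # PostgreSQL connection you would send an explicit BEGIN here.

insert_tag(conn, "existing")  # expected failure, handled in place
insert_tag(conn, "new")       # must still take effect afterwards
conn.commit()

count = conn.execute("SELECT count(*) FROM tags").fetchall()[0][0]
print(count)  # 2 — the handled failure did not poison later commands
```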
"Thomas G. Lockhart" <lockhart@alumni.caltech.edu> writes:
> Why not have your error handler issue a new BEGIN statement? Or an
> END/BEGIN pair if that is what is needed? afaik that wouldn't be any
> different than the proposed "CONTINUE TRANSACTION" statement.
> Or do you really want to commit the previous statements in the
> transaction rather than throw them away? If so, you didn't really need
> the transaction block there anyway, since you are willing to accept a
> partial result.

You're right, I don't especially need or want the rollback aspect of transactions. This is a shared database with half a dozen processes watching it, and what I do want is to ensure that the other processes will not see the logically-inconsistent state that occurs while updating several related tuples with multiple UPDATE commands.

I guess what I'd really like is to be able to separate the atomic-update and rollback features of transactions. If I could use "LOCK relation" without having to be inside a transaction, I'd be a much happier camper. (This'd at least imply an UNLOCK statement, I suppose. Are there deeper relationships between locks and transactions?)

			regards, tom lane
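[Editor's sketch of the atomic-visibility property wanted above: a watching process should never see the half-done state of a multi-UPDATE change. Python's sqlite3 is a stand-in (two connections to one file database play writer and watcher), and the table is invented for illustration.]

```python
import sqlite3, tempfile, os

# Two connections to the same file database stand in for the writer and
# one of the watching processes (":memory:" would give each connection
# its own private database, so a real file is required).
db = os.path.join(tempfile.mkdtemp(), "shared.db")
writer = sqlite3.connect(db)
reader = sqlite3.connect(db)

writer.execute("CREATE TABLE balance (acct TEXT PRIMARY KEY, amount INT)")
writer.executemany("INSERT INTO balance VALUES (?, ?)",
                   [("a", 100), ("b", 0)])
writer.commit()

# One logical change made with two UPDATEs: move 40 from a to b.
writer.execute("UPDATE balance SET amount = amount - 40 WHERE acct = 'a'")

# Mid-transaction, the watcher still sees the old, *consistent* state,
# never the intermediate one where the 40 has left 'a' but not reached 'b'.
total_mid = reader.execute(
    "SELECT sum(amount) FROM balance").fetchall()[0][0]

writer.execute("UPDATE balance SET amount = amount + 40 WHERE acct = 'b'")
writer.commit()

total_after = reader.execute(
    "SELECT sum(amount) FROM balance").fetchall()[0][0]
print(total_mid, total_after)  # 100 100 — never an inconsistent 60
```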