Re: [RESEND] Transaction auto-abort causes grief with Spring Framework - Mailing list pgsql-general
From: Ted Byers
Subject: Re: [RESEND] Transaction auto-abort causes grief with Spring Framework
Msg-id: 466466.31305.qm@web88306.mail.re4.yahoo.com
In response to: Re: [RESEND] Transaction auto-abort causes grief with Spring Framework (Tyson Lloyd Thwaites <tyson.lloydthwaites@allianza.com.au>)
List: pgsql-general
Sorry for top posting - but that is an annoying limitation of this web interface to email. :-(
Isn't what you're doing here a misuse of the idea of a transaction? I don't claim to be an expert in this, but I thought the idea of a transaction was that you bundle a group of statements together that must all succeed or all fail together; if any one of them fails, all the preceding work completed in the transaction gets rolled back. Did I misunderstand something here? If not, then the operations you describe for your first transaction really ought not to be in the same transaction.
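To illustrate what I mean, here is a minimal JDBC sketch of that all-or-nothing behaviour (the connection details, table and column names are made up for the example):

    import java.sql.*;

    public class TxSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; substitute your own.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "password");
            conn.setAutoCommit(false);           // start an explicit transaction
            try {
                Statement st = conn.createStatement();
                st.executeUpdate("INSERT INTO files (path) VALUES ('/a')");
                st.executeUpdate("INSERT INTO files (path) VALUES ('/b')");
                conn.commit();                   // both rows become visible together
            } catch (SQLException e) {
                conn.rollback();                 // neither row survives
                // In PostgreSQL, once one statement in the transaction has failed,
                // every further statement on this connection raises
                // "current transaction is aborted" until a rollback is issued.
            } finally {
                conn.close();
            }
        }
    }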
Now, when I run a script of SQL statements in MS SQL Server, a statement that encounters an error will simply stop, and control can then be passed to the next SQL statement in the script - BUT THIS IS NOT IN A TRANSACTION! It is only a script, and in the context in which I was working I didn't need to use transactions. I can also provide additional options in the statements that modify this behaviour, so that the script aborts at the error condition rather than continuing with the next SQL statement. With some statements, what I would regard as an error requiring the script to abort seems to be treated as merely a warning by default. For example, if in a bulk load of data into a table there is a problem with the data for the tenth field on the second row, then the first row, which is correct, is stored, the statement aborts at the problem row, and control is passed to the next SQL statement. In my situation, I had other programs that would clean up the data when this sort of problem arose, so where the problem is by default seen as warranting only a warning, I could upgrade it to be regarded as an error. And of course there are options for controlling how the script behaves when an error occurs. Perhaps that is involved in the behaviour you are reporting for MS SQL Server? I haven't investigated this myself, as I haven't had the problem you describe.
I didn't quite understand your description, in another post, of how Spring treats your database statements. Am I to understand it puts all your SQL statements into a single transaction? If so, either they badly mishandle transactions or they are working with a very different concept of what a transaction is.
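From the little reading I have done so far, my understanding (and it is only that) is that Spring's declarative transaction support wraps a whole service method in one transaction, so every statement issued in that method shares the transaction's fate, and work that must commit regardless has to be demarcated separately. Something like the following, where the class and method names are my own invention:

    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    public class ScanService {

        @Transactional
        public void recordScan() {
            // every statement issued here runs in the one transaction,
            // so a single failure aborts all of it
            insertFiles();
            updateStats();
        }

        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void updateAgentLastContact() {
            // demarcated as its own transaction, so it can commit even if
            // a caller's transaction is rolled back
        }

        private void insertFiles() { /* inserts */ }
        private void updateStats() { /* stats updates */ }
    }

If that is right it would explain what you are seeing, but I would want someone with more Spring experience to confirm it.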
One last question. You describe part of your problem as being the correct addition of data to an audit table. If I haven't misunderstood what you're doing, isn't that audit incomplete if you record only statement failures? When I deal with audits, I put the logic into triggers whenever possible. I also maintain an independent error log from my application code, probably in a catch clause, and this either goes to a flat file or uses an independent connection to the database. And my practice is to use separate try/catch blocks for each statement that can throw an exception, to improve the granularity of the error-handling logic; that is the only way to have a chance of getting one statement to execute regardless of whether or not a preceding statement throws an exception.
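To make that concrete, here is roughly what I have in mind, with hypothetical method and table names, and assuming auto-commit is on so that each statement is its own transaction:

    try {
        deleteDrive(conn, driveId);        // may fail on a referential link
    } catch (SQLException e) {
        logError(auditConn, e);            // written over an independent connection,
                                           // so the log entry does not depend on
                                           // the fate of the failed statement
    }
    try {
        updateStats(conn);                 // still attempted whatever happened above,
                                           // because it is in its own try block
    } catch (SQLException e) {
        logError(auditConn, e);
    }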
I have a special interest in this because I am just beginning to look at Spring (I downloaded it just a few days ago).
Cheers,
Ted
Tyson Lloyd Thwaites <tyson.lloydthwaites@allianza.com.au> wrote:
Our app uses system state. We scan filesystems and record file
information in a database.
Here is one example:
- insert 250 files
- update some stats (MUST work even if insert fails)
- update agent last-contact time (also must work so we know it's not awol)
When the last insert has finished, the server will kick into summary mode:
- wait for asynchronous inserts to finish
- summarise data
- do some misc operations
- switch back to 'Idle' state
That last step is where the trick is: if anything goes wrong with the
previous operations, we MUST be able to put the system back into an
'Idle' state. Currently we do this in a catch block, ie:
try {
    // summarise the data and do the misc operations above
} catch (Exception e) {
    // put the system back into the 'Idle' state
}
Now of course that won't work in pg. We use the same logic all through
the system, so you can see my problem. For example, if the user deletes
a drive record that still has referential links to it, we do this:
try {
    // delete the drive record
} catch (ReferentialIntegrityException e) {
    // do 'something else' instead, on the same connection
}
We rely on the fact that we can still do things and commit a transaction
even if a single statement has failed.
The above drive delete case is where I first noticed the problem when
switching from MSSQL: instead of the server handling the exception and
doing something else instead, the 'something else' also threw an
exception (txn is aborted), which propagated to the client end.
UPDATE: Webb has proposed that this behaviour may be MSSQL misbehaving.
--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd
M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au