Re: Transaction Exception Question - Mailing list pgsql-general
From: Jon Swinth
Subject: Re: Transaction Exception Question
Date:
Msg-id: 200208131742.g7DHg8M02553@localhost.localdomain
In response to: Re: Transaction Exception Question ("scott.marlowe" <scott.marlowe@ihs.com>)
Responses: Re: Transaction Exception Question
List: pgsql-general
Thanks Scott for your reply.

I don't agree that an insert/update/delete error should automatically abort the transaction. You have not provided for the fact that the error may be handled. I will give you an example that makes my case.

Let's say you have an inventory table. The inventory table has an integer primary key and a unique key on location and product. The unique key makes sure that there is only one record for each product in a given location. Now imagine that you have a high-volume database with many clients, and you have a process that attempts to put some quantity of a product into a location. That process first selects to see if the record already exists so it can be updated, and inserts a row when it isn't found. Now imagine that this is just part of a long-running transaction and that multiple clients will want to put more of the same product in the same location.

Here is what happens with PostgreSQL: Client A runs the process, sees that there is no record, inserts, and goes on to the next thing to be done in the same transaction. Client B runs the process, sees that there is no record because Client A has not committed, attempts an insert, and blocks until Client A commits or rolls back. Client A commits, Client B gets an exception and is now forced to roll back everything else in the transaction.

Here is what happens with another DB (e.g. Oracle) that doesn't abort the transaction: when Client B gets the exception, the exception is caught by the process, the process selects back the newly created row from Client A, and the existing record is updated (see the sketch after the quoted message below).

You may not think this would happen very often, but my experience says otherwise. As the number of clients goes up and the DB machine begins to slow down from the strain, this can happen a great deal. Just because a statement has been issued that results in an error does not automatically mean that (1) all the other statements in the transaction are invalid, or (2) the application code does not have a workaround for that error. Whether the transaction should be rolled back or not is a question for the application/client, not the DB.

On Tuesday 13 August 2002 09:40 am, scott.marlowe wrote:
> On Tue, 13 Aug 2002, Jon Swinth wrote:
> > A while back, I ran into a problem that turned out to be in Postgre on
> > purpose. In a long running transaction (or any transaction for that
> > matter) if an exception is thrown then you have no choice but to rollback
> > that transaction. Is there someone that can point me in the right
> > direction in finding out why this would be? It has bitten me a few times
> > and will limit Postgre's ability to work in a high volume operation.
>
> Seeing as how the purpose of a transaction is to ensure that either all
> the changes to the database are made or none are made, I'm not sure what
> should change about this behaviour.
>
> Or were you looking for things like commit / rollback segments? In
> general, instead of using commit / rollback segments I just do a begin /
> end pair around each set of data that I would have used a commit /
> rollback segment.
>
> Sometimes I think postgresql's tendency to get pedantic about which errors
> cause an auto abort is a little bothersome (i.e. an error thrown by a
> select or set statement will abort the current transaction) but for
> update/delete/insert commands, any single error SHOULD cause the whole
> transaction to abort, thus ensuring transactional integrity.
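To make the flow concrete, here is a rough sketch of the process I'm describing, written in Python with psycopg2 against an assumed inventory(id, location, product, qty) table that has a unique key on (location, product). The table, column, and function names are made up for illustration, not taken from real code:

    import psycopg2

    def add_inventory(conn, location, product, qty):
        """Put qty of product into location, inserting the row if needed."""
        cur = conn.cursor()
        cur.execute(
            "SELECT id FROM inventory WHERE location = %s AND product = %s",
            (location, product))
        if cur.fetchone() is None:
            try:
                cur.execute(
                    "INSERT INTO inventory (location, product, qty) "
                    "VALUES (%s, %s, %s)",
                    (location, product, qty))
                return
            except psycopg2.IntegrityError:
                # Another client inserted the same (location, product) first
                # and committed, so our INSERT hit the unique key.
                # On a database that leaves the transaction usable (e.g.
                # Oracle), we can simply fall through and update the row
                # that now exists.  On PostgreSQL the whole transaction is
                # already marked aborted at this point, so the UPDATE below
                # fails with "current transaction is aborted".
                pass
        cur.execute(
            "UPDATE inventory SET qty = qty + %s "
            "WHERE location = %s AND product = %s",
            (qty, location, product))

The except branch is exactly where the two databases part ways: Oracle lets the UPDATE proceed, while PostgreSQL has already aborted the transaction by the time the error reaches the application.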