Thread: Current transaction is aborted, commands ignored until end of transaction block

Current transaction is aborted, commands ignored until end of transaction block

From
Jan Bakuwel
Date:
Hi,

Maybe there is a simple solution for PostgreSQL behaviour that is
annoying me...

I've got users making updates to a master table and a number of detail
tables. All changes to the master record and related detail records are
encapsulated in a transaction so everything can be rolled back if
necessary and also to lock those master and related records for the user
making the changes.

When they do something that violates a constraint (for example adding a
duplicate detail record where that is not allowed), PostgreSQL aborts
the transaction. What I would much rather have is that PostgreSQL
returns an error but does not cancel the transaction as it's perfectly
OK (from a user's point of view) to try to do something that violates a
constraint.

What annoys me is that I don't think that a constraint violation made by
a user should result in an aborted transaction. There is probably a very
good reason to do that; however, the logic escapes me...

Of course I can start testing existing values in the database before
accepting them in the user interface, but that's putting the cart before
the horse. I'd much rather use the constraints at the database level to
tell me a particular update can't be done, and do that without losing
everything else I happened to have done in that transaction up to that
point.

Any suggestions?

Jan



Re: Current transaction is aborted, commands ignored until end of transaction block

From
David Johnston
Date:
On Dec 29, 2011, at 23:25, Jan Bakuwel <jan.bakuwel@greenpeace.org> wrote:

> Hi,
>
> Maybe there is a simple solution for PostgreSQL behaviour that is
> annoying me...
>
> I've got users making updates to a master table and a number of detail
> tables. All changes to the master record and related detail records are
> encapsulated in a transaction so everything can be rolled back if
> necessary and also to lock those master and related records for the user
> making the changes.
>
> When they do something that violates a constraint (for example adding a
> duplicate detail record where that is not allowed), PostgreSQL aborts
> the transaction. What I would much rather have is that PostgreSQL
> returns an error but does not cancel the transaction as it's perfectly
> OK (from a user's point of view) to try to do something that violates a
> constraint.
>
> What annoys me is that I don't think that a constraint violation made by
> a user should result in an aborted transaction. There is probably a very
> good reason to do that; however, the logic escapes me...
>
> Of course I can start testing existing values in the database before
> accepting them in the user interface, but that's putting the cart before
> the horse. I'd much rather use the constraints at the database level to
> tell me a particular update can't be done, and do that without losing
> everything else I happened to have done in that transaction up to that
> point.
>
> Any suggestions?
>
> Jan
>
>

Start a "savepoint" before each sub-update and rollback to the savepoint if the update fails, and then try again with
differentdata.  If it succeeds you then release the savepoint anad move on. 
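
A minimal sketch of that pattern (the table and column names here are hypothetical):

BEGIN;
UPDATE master SET notes = 'updated' WHERE id = 1;
SAVEPOINT detail_insert;
INSERT INTO detail (master_id, code) VALUES (1, 'A');  -- suppose this violates a unique constraint
ROLLBACK TO SAVEPOINT detail_insert;                   -- undoes only the failed INSERT, not the UPDATE
INSERT INTO detail (master_id, code) VALUES (1, 'B');  -- retry with different data
RELEASE SAVEPOINT detail_insert;                       -- success: discard the savepoint and move on
COMMIT;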

David J.

Hi David,


> Start a "savepoint" before each sub-update and roll back to the savepoint if the update fails, and then try again with
> different data. If it succeeds you then release the savepoint and move on.

Yeah... not ideal in my case & will result in "messy" code...

Would be nice to have an option in PostgreSQL something along the lines
of:  'abort-transaction-on-constraint-violation = false'....

Jan



Re: Current transaction is aborted, commands ignored until end of transaction block

From
Leif Biberg Kristensen
Date:
 On Friday 30 December 2011 09:43:38, Jan Bakuwel wrote:

> Would be nice to have an option in PostgreSQL something along the lines
> of:  'abort-transaction-on-constraint-violation = false'....

That option is called MySQL with MyISAM tables.

Seriously, if the user encounters a constraint violation, that is IMO a 
symptom of bad design. Such conditions should be checked and caught _before_ 
the transaction begins.

regards, Leif


Re: Current transaction is aborted, commands ignored until end of transaction block

From
Thomas Kellerer
Date:
Leif Biberg Kristensen wrote on 30.12.2011 10:44:
>   On Friday 30 December 2011 09:43:38, Jan Bakuwel wrote:
>
>> Would be nice to have an option in PostgreSQL something along the lines
>> of:  'abort-transaction-on-constraint-violation = false'....
>
> That option is called MySQL with MyISAM tables.
>
Not true.

Oracle and others (I believe at least DB2) behave such that you can insert a bunch of rows and if one or more throw a
constraint violation, the transaction can still be committed, persisting those that do not violate the constraint.




Re: Re: Current transaction is aborted, commands ignored until end of transaction block

From
Marcin Mirosław
Date:
On 30.12.2011 12:03, Thomas Kellerer wrote:
> Leif Biberg Kristensen wrote on 30.12.2011 10:44:
>>   On Friday 30 December 2011 09:43:38, Jan Bakuwel wrote:
>>
>>> Would be nice to have an option in PostgreSQL something along the lines
>>> of:  'abort-transaction-on-constraint-violation = false'....
>>
>> That option is called MySQL with MyISAM tables.
>>
> Not true.
> 
> Oracle and others (I believe at least DB2) behave such that you can
> insert a bunch of rows and if one or more throw a constraint violation,
> the transaction can still be committed persisting those that do not
> violate the constraint.

Hi,
isn't this option:
http://www.postgresql.org/docs/current/static/sql-set-constraints.html ?
Regards


Re: Current transaction is aborted, commands ignored until end of transaction block

From
Thomas Kellerer
Date:
Marcin Mirosław wrote on 30.12.2011 12:07:
>>>> Would be nice to have an option in PostgreSQL something along the lines
>>>> of:  'abort-transaction-on-constraint-violation = false'....
>>>
>>> That option is called MySQL with MyISAM tables.
>>>
>> Not true.
>>
>> Oracle and others (I believe at least DB2) behave such that you can
>> insert a bunch of rows and if one or more throw a constraint violation,
>> the transaction can still be committed persisting those that do not
>> violate the constraint.
>
> Hi,
> isn't this option:
> http://www.postgresql.org/docs/current/static/sql-set-constraints.html ?
> Regards
>
No, that's something different.
It would still prevent committing the transaction if the constraint check fails at the end.

This strict transaction concept is somewhat irritating when you come from another DBMS (such as Oracle or DB2).
Using savepoints is the only option to "simulate" that behaviour in PostgreSQL (and then the constraints need to be immediate).
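
A sketch of why deferring doesn't help (assuming a unique constraint on detail that was declared DEFERRABLE and named detail_code_uniq; both names are made up):

BEGIN;
SET CONSTRAINTS detail_code_uniq DEFERRED;
INSERT INTO detail (master_id, code) VALUES (1, 'A');
INSERT INTO detail (master_id, code) VALUES (1, 'A');  -- no error yet: the check is deferred
COMMIT;  -- the deferred check fires here and the whole transaction is still rolled back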

Thomas




Re: Current transaction is aborted, commands ignored until end of transaction block

From
Leif Biberg Kristensen
Date:
On Friday 30 December 2011 05:25:22, Jan Bakuwel wrote:
> Of course I can start testing existing values in the database before
> accepting them in the user interface, but that's putting the cart before
> the horse. I'd much rather use the constraints at the database level to
> tell me a particular update can't be done, and do that without losing
> everything else I happened to have done in that transaction up to that
> point.

Here's an example from a plpgsql function I wrote, where a possible violation 
of unique constraint on (parent_id, source_text) is checked within the 
transaction:

SELECT source_id FROM sources WHERE parent_id = par_id AND source_text = txt
INTO x;
IF NOT FOUND THEN
    INSERT INTO sources (parent_id, source_text, sort_order, source_date, part_type)
        VALUES (par_id, txt, srt, true_date_extract(txt), pt)
        RETURNING source_id INTO src_id;
ELSE
    RAISE NOTICE 'Source % has the same parent id and text as you tried to enter.', x;
    RETURN -x; -- abort the transaction and return the offending source id as a negative number.
END IF;

I don't know if it's considered good form to issue a RETURN in the middle of a 
function on an error condition, but the main point is that you can take an 
alternate action when the violation is about to happen. Before I introduced 
this test, the PHP interface just barfed all over the place with "transaction 
aborted" messages.

Here's another test from the same function, where the alternate action is 
basically a no-op:

-- don't violate unique constraint on (source_fk, event_fk) in the event_citations table.
-- if this source-event association already exists, it's rather pointless to repeat it.
PERFORM * FROM event_citations WHERE event_fk = event AND source_fk = src_id;
IF NOT FOUND THEN
    INSERT INTO event_citations (event_fk, source_fk) VALUES (event, src_id);
ELSE
    RAISE NOTICE 'citation exists';
END IF;

regards, Leif


Hi Leif,

On 30/12/11 22:44, Leif Biberg Kristensen wrote:
>  On Friday 30 December 2011 09:43:38, Jan Bakuwel wrote:
>
>> Would be nice to have an option in PostgreSQL something along the lines
>> of:  'abort-transaction-on-constraint-violation = false'....
> That option is called MySQL with MyISAM tables.
>
> Seriously, if the user encounters a constraint violation, that is IMO a
> symptom of bad design. Such conditions should be checked and caught _before_
> the transaction begins.

Really?

One of my detail tables here is a list of codes. The design is currently
such that you are not allowed to add two identical codes in that table for
a particular related master record, i.e. if you try, it raises a
constraint violation (duplicate key). Users try anyway (you know those
pesky users, doing things they're not supposed to do).

Why would that be a bad design?

I simply want to tell the user: sorry you can't do this because it
violates a constraint (duplicate key).

Sometimes they try to delete something that has other records referring
to it and the database design is so that it won't cascade delete (for
various reasons). In that case I want to tell them: sorry you can't do
this because there are related records.

In a well designed system, you'd have those constraints at the database
level not the application level and use exception handling to deal with
these, not write tests to find out the possible error conditions
beforehand. Of course it's possible to write code around all of this
(and I'm starting to realise that is what I might have to do) but I
consider that bad design.

I don't claim to know all other RDBMS but I think PostgreSQL might be
one of the few (or only one) that considers a constraint violation
something really really really serious... so serious that the
transaction will have to be aborted. Quite a few other RDBMS will give
you the error but will also allow you to continue on your merry way (and
not lose everything you've done up to that point).

Why-o-why have the PostgreSQL developers decided to do it this way...?

regards,
Jan



Jan Bakuwel <jan.bakuwel@greenpeace.org> writes:
> Why-o-why have the PostgreSQL developers decided to do it this way...?

Because starting and cleaning up a subtransaction is an expensive thing.
If we had auto-rollback at the statement level, you would be paying that
overhead for every statement in every transaction, whether you need it
or not (since obviously there's no way to forecast in advance whether a
statement will fail).  Making it depend on explicit savepoints allows
the user/application to control whether that overhead is expended or
not.

If you want to pay that price all the time, there are client-side
frameworks that will do it for you, or you can roll your own easily
enough.  So we do not see it as a big deal that the database server
itself doesn't act that way.
        regards, tom lane
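
A plpgsql exception block is one way to opt into that per-statement cost explicitly: each BEGIN ... EXCEPTION block sets up a subtransaction of its own. A minimal sketch (the table name is hypothetical):

DO $$
BEGIN
    INSERT INTO detail (master_id, code) VALUES (1, 'A');
EXCEPTION
    WHEN unique_violation THEN
        -- only the block's implicit subtransaction is rolled back;
        -- the surrounding transaction carries on
        RAISE NOTICE 'duplicate code, insert skipped';
END;
$$;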


On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jan Bakuwel <jan.bakuwel@greenpeace.org> writes:
>> Why-o-why have the PostgreSQL developers decided to do it this way...?
>
> Because starting and cleaning up a subtransaction is an expensive thing.
> If we had auto-rollback at the statement level, you would be paying that
> overhead for every statement in every transaction, whether you need it
> or not (since obviously there's no way to forecast in advance whether a
> statement will fail).  Making it depend on explicit savepoints allows
> the user/application to control whether that overhead is expended or
> not.
>
> If you want to pay that price all the time, there are client-side
> frameworks that will do it for you, or you can roll your own easily
> enough.  So we do not see it as a big deal that the database server
> itself doesn't act that way.

Having used PostgreSQL a LOT, I find that being able to throw an
entire update at the db and having it fail / be rolled back / CTRL-C
out of and fix the problem is actually much less work than the
frameworks for other databases.  Once you've chased down bad data in a
load file a few times, it's really pretty easy to spot and fix these
issues and just run the whole transaction again.  Since PostgreSQL
doesn't have a very big penalty for rolling back a whole transaction
it's not that bad.  Some dbs, like MySQL with innodb table handler
have a 10:1 or greater penalty for rollbacks.  Insert a million rows
in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
a rollback is generally instantaneous, with the only real cost being
bloat in the tables or indexes.


Re: Current transaction is aborted, commands ignored until end of transaction block

From
Samuel Gendler
Date:


On Fri, Dec 30, 2011 at 4:03 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jan Bakuwel <jan.bakuwel@greenpeace.org> writes:
>> Why-o-why have the PostgreSQL developers decided to do it this way...?
>
> Because starting and cleaning up a subtransaction is an expensive thing.
> If we had auto-rollback at the statement level, you would be paying that
> overhead for every statement in every transaction, whether you need it
> or not (since obviously there's no way to forecast in advance whether a
> statement will fail).  Making it depend on explicit savepoints allows
> the user/application to control whether that overhead is expended or
> not.
>
> If you want to pay that price all the time, there are client-side
> frameworks that will do it for you, or you can roll your own easily
> enough.  So we do not see it as a big deal that the database server
> itself doesn't act that way.

Having used PostgreSQL a LOT, I find that being able to throw an
entire update at the db and having it fail / be rolled back / CTRL-C
out of and fix the problem is actually much less work than the
frameworks for other databases.  Once you've chased down bad data in a
load file a few times, it's really pretty easy to spot and fix these
issues and just run the whole transaction again.  Since PostgreSQL
doesn't have a very big penalty for rolling back a whole transaction
it's not that bad.  Some dbs, like MySQL with innodb table handler
have a 10:1 or greater penalty for rollbacks.  Insert a million rows
in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
a rollback is generally instantaneous, with the only real cost being
bloat in the tables or indexes.

More to the point - if a statement is truly independent of all the other statements in a transaction, it would seem that the transaction itself is poorly defined.  The whole point of a transaction is to define an atomic unit of work. If you don't care about atomicity, enable auto commit and just catch the constraint violation exception and continue on your merry way.

Yes, on occasion, working around the way PostgreSQL functions causes extra work for a developer (I don't think anyone is suggesting that it should change the end user experience, as was sort-of implied by one response on this thread), but so too can code which is not atomic cause extra work for a developer. Transactions are intended to be atomic, so it makes far more sense to me to implement it the Postgres way and incur the modicum of extra developer overhead in the few cases where I may want to deal with acceptable constraint violations, rather than in the many cases where I want a transaction to be atomic.
 
In the example of users adding a new value to an enumerated list in the same unit of work as other rows are inserted in, it is likely not too much work to use a trigger to check the insert prior to executing it - assuming that list is in another table with just a foreign key going into the table the majority of your inserts are going to.  Alternatively, if you aren't doing a bulk insert via a copy, it probably isn't too much work to construct the set of inserts needed for the joined table separately and issue those in separate transactions before doing the main transaction.
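
One possible sketch of that trigger idea (all names are hypothetical): a BEFORE INSERT row trigger that quietly skips rows which would violate the unique constraint, so the statement never errors at all:

CREATE FUNCTION skip_duplicate_code() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM detail
               WHERE master_id = NEW.master_id AND code = NEW.code) THEN
        RAISE NOTICE 'code % already present for master %, skipping',
                     NEW.code, NEW.master_id;
        RETURN NULL;  -- returning NULL from a BEFORE ROW trigger skips this row
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER detail_skip_duplicates
    BEFORE INSERT ON detail
    FOR EACH ROW EXECUTE PROCEDURE skip_duplicate_code();

(Note the EXISTS check is not concurrency-safe by itself; the unique constraint remains the backstop.)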


Hi,

Thanks for all having responded to my mail.

I understand there's no way around it at the moment so I'll have to
start writing some code to deal with this behaviour.

cheers!
Jan





On Fri, Dec 30, 2011 at 6:19 PM, Samuel Gendler
<sgendler@ideasculptor.com> wrote:
>
>
> On Fri, Dec 30, 2011 at 4:03 PM, Scott Marlowe <scott.marlowe@gmail.com>
> wrote:
>>
>> On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> > Jan Bakuwel <jan.bakuwel@greenpeace.org> writes:
>> >> Why-o-why have the PostgreSQL developers decided to do it this way...?
>> >
>> > Because starting and cleaning up a subtransaction is an expensive thing.
>> > If we had auto-rollback at the statement level, you would be paying that
>> > overhead for every statement in every transaction, whether you need it
>> > or not (since obviously there's no way to forecast in advance whether a
>> > statement will fail).  Making it depend on explicit savepoints allows
>> > the user/application to control whether that overhead is expended or
>> > not.
>> >
>> > If you want to pay that price all the time, there are client-side
>> > frameworks that will do it for you, or you can roll your own easily
>> > enough.  So we do not see it as a big deal that the database server
>> > itself doesn't act that way.
>>
>> Having used PostgreSQL a LOT, I find that being able to throw an
>> entire update at the db and having it fail / be rolled back / CTRL-C
>> out of and fix the problem is actually much less work than the
>> frameworks for other databases.  Once you've chased down bad data in a
>> load file a few times, it's really pretty easy to spot and fix these
>> issues and just run the whole transaction again.  Since PostgreSQL
>> doesn't have a very big penalty for rolling back a whole transaction
>> it's not that bad.  Some dbs, like MySQL with innodb table handler
>> have a 10:1 or greater penalty for rollbacks.  Insert a million rows
>> in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
>> a rollback is generally instantaneous, with the only real cost being
>> bloat in the tables or indexes.
>
>
> More to the point - if a statement is truly independent of all the other
> statements in a transaction, it would seem that the transaction itself is
> poorly defined.  The whole point of a transaction is to define an atomic
> unit of work. If you don't care about atomicity, enable auto commit and just
> catch the constraint violation exception and continue on your merry way.

But the performance penalty for autocommit is huge.  It's still almost
always faster to run a single big transaction and fix errors than to
do single commits when you're doing a large import.


On 2011-12-30, Jan Bakuwel <jan.bakuwel@greenpeace.org> wrote:

> What annoys me is that I don't think that a constraint violation made by
> a user should result in an aborted transaction. There is probably a very
> good reason to do that; however, the logic escapes me...

The reason for it is that it allows several updates (or other DML) to be
run without checking for success, with success only checked at the
commit stage. This makes it easier for the DBA to enforce database
consistency against wayward applications and not suffer from partial
inserts.

> Of course I can start testing existing values in the database before
> accepting them in the user interface, but that's putting the cart before
> the horse. I'd much rather use the constraints at the database level to
> tell me a particular update can't be done, and do that without losing
> everything else I happened to have done in that transaction up to that
> point.
>
> Any suggestions?

Savepoints can probably do what you want, but long-lived transactions
are a bad idea in general, especially if you expect to have several
physical users accessing your database simultaneously.

-- 
⚂⚃ 100% natural



Re: Current transaction is aborted, commands ignored until end of transaction block

From
Feike Steenbergen
Date:
I recently started receiving this error as well; this was because I had
disabled autocommit.
With the following setting in .psqlrc, an error no longer forces a
rollback of the whole transaction: psql automatically creates a
savepoint before each statement, allowing you to fix the error and continue:

This is now in my .psqlrc:

\set AUTOCOMMIT off
\set ON_ERROR_ROLLBACK on

http://www.postgresql.org/docs/9.1/static/app-psql.html look for
ON_ERROR_ROLLBACK

"When on, if a statement in a transaction block generates an error,
the error is ignored and the transaction continues. When interactive,
such errors are only ignored in interactive sessions, and not when
reading script files."
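
For illustration, a session with those settings might look like this (the table is hypothetical):

\set AUTOCOMMIT off
\set ON_ERROR_ROLLBACK on
INSERT INTO codes VALUES (1);
INSERT INTO codes VALUES (1);   -- ERROR: duplicate key value violates unique constraint
INSERT INTO codes VALUES (2);   -- still accepted: psql rolled back to its hidden savepoint
COMMIT;                         -- values 1 and 2 are committed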