Thread: ROLLBACK automatically

ROLLBACK automatically

From
hstenger@adinet.com.uy
Date:
Hi you all!

I need to figure out how to instruct PostgreSQL to ROLLBACK automatically when
something goes wrong within a transaction (i.e. it falls into *ABORT STATE*).

Do you think a trigger will do?

Do you think only a hack will do? (Like calling the rollback code after calling
the aborting code.)

Or is there a SET I don't know about?

Thanx!!

Cheers,
Haroldo.
--
----------------------+------------------------
 Haroldo Stenger      | hstenger@ieee.org
 Montevideo, Uruguay. | hstenger@adinet.com.uy
----------------------+------------------------
 Visit UYLUG Web Site: http://www.linux.org.uy
-----------------------------------------------

Re: ROLLBACK automatically

From
"Alex Bolenok"
Date:
> Hi you all!
>
> I need to figure out how to instruct PostgreSQL to ROLLBACK automatically
> when something goes wrong within a transaction (i.e. it falls into *ABORT
> STATE*).
>
> Do you think a trigger will do?
>
> Do you think only a hack will do? (Like calling the rollback code after
> calling the aborting code.)
>
> Or is there a SET I don't know about?
>
> Thanx!!
>
> Cheers,
> Haroldo.

If something bad happens inside the transaction, all previous changes made
within the transaction are discarded, even if you COMMIT the changes. See
output:

peroon=# SELECT * FROM foo;
 id | name
----+------
(0 rows)

peroon=# BEGIN;
BEGIN
peroon=# INSERT INTO foo VALUES (1, 'Some value');
INSERT 255330 1
peroon=# SELECT * FROM foo;
 id |    name
----+------------
  1 | Some value
(1 row)

peroon=# INSERT INTO foo VALUES (1, 'The same value');
ERROR:  Cannot insert a duplicate key into unique index foo_pkey
peroon=# COMMIT;
COMMIT
peroon=# SELECT * FROM foo;
 id | name
----+------
(0 rows)

So when your transaction is in ABORT STATE, you may use COMMIT, ROLLBACK or
ABORT to close it, and you will get the same result.
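
For instance, a minimal sketch reusing the foo table from above; the last
statement could just as well be COMMIT or ABORT:

BEGIN;
INSERT INTO foo VALUES (1, 'Some value');
INSERT INTO foo VALUES (1, 'The same value');  -- fails: the block is now aborted
ROLLBACK;  -- COMMIT or ABORT here has exactly the same effect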

If an exception happens inside a function or trigger, the whole transaction is
rolled back automatically (PostgreSQL doesn't support nested transactions), so
you don't have to worry about rolling it back manually.
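
For illustration, a minimal hypothetical sketch (the check_positive function
and its message are made up for this example, not something from the thread):
an exception raised in a PL/pgSQL function aborts the surrounding transaction
block just like any other error.

CREATE FUNCTION check_positive(INT4) RETURNS INT4 AS '
BEGIN
    IF $1 < 0 THEN
        -- raising an exception aborts the whole surrounding transaction
        RAISE EXCEPTION ''check_positive: got a negative value %'', $1;
    END IF;
    RETURN $1;
END;
' LANGUAGE 'plpgsql';

BEGIN;
INSERT INTO foo VALUES (2, 'Another value');
SELECT check_positive(-1);  -- raises the exception: the block is now aborted
COMMIT;                     -- only closes the block; the INSERT above is rolled back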

BTW, ABORT is just an alias for ROLLBACK in PostgreSQL, so the 'rollback code'
and 'aborting code' you wrote about do the same thing :)

Alex Bolenok.


Re: ROLLBACK automatically

From
Kshipra
Date:
Hello all,
I would like to mention something in this regard.
I have executed all the commands given here in the same order, but what the
author says, that after the insert fails whatever you have inserted rolls
back, is not the case.
As we all know, Postgres works in autocommit mode, so when a user successfully
inserts a row into a table and then tries to insert the same row again, the
already entered record will not get deleted from the table. On top of
autocommit, we are executing COMMIT; so it further explicitly commits the
first transaction and will NOT ROLLBACK the successfully entered row.
It should not happen that way under any condition anyway, otherwise there
would be so much duplication of work.
If anything here is incorrect please correct me, but I have just now executed
the whole set of commands in exactly the same fashion as given in Alex's
reply mail.
thanks
Kshipra
Alex Bolenok wrote:

> If something bad happens inside the transaction, all previous changes made
> within the transaction are discarded, even if you COMMIT the changes. [...]


Re: ROLLBACK automatically

From
"Alex Bolenok"
Date:
> I have executed all the commands given here in the same order, but what the
> author says, that after the insert fails whatever you have inserted rolls
> back, is not the case. As we all know, Postgres works in autocommit mode, so
> when a user successfully inserts a row into a table and then tries to insert
> the same row again, the already entered record will not get deleted from the
> table. [...]

Autocommit mode means that when you run a statement _outside_ a transaction
block, it implicitly starts its own transaction block, executes, and then
commits that transaction. When you _explicitly_ start a transaction block with
a BEGIN statement, it will not be committed until you _explicitly_ commit it
with a COMMIT statement.

Try performing the following statements (that means _all_ of them, including
BEGIN and COMMIT):

peroon=# CREATE TABLE foo (id INT4 PRIMARY KEY, name TEXT);
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey' for
table 'foo'
CREATE
peroon=# BEGIN;
BEGIN
peroon=# INSERT INTO foo VALUES (1, 'Some value');
INSERT 258925 1
peroon=# SELECT * FROM foo;
 id |    name
----+------------
  1 | Some value
(1 row)

peroon=# INSERT INTO foo VALUES (1, 'The same value');
ERROR:  Cannot insert a duplicate key into unique index foo_pkey
peroon=# COMMIT;
COMMIT
peroon=# SELECT * FROM foo;
 id | name
----+------
(0 rows)

What have we done?

First we created the table foo with id as a PRIMARY KEY field. A unique index
was created for this field, so if we try to insert a duplicate value into it
we will get an error.

Then we started an explicit transaction block with a BEGIN statement and
inserted a value into the PRIMARY KEY field of the table foo. This operation
completed successfully, and when we SELECT'ed all rows from the table we found
the record we had just inserted.

Then we inserted a duplicate value into the id field. This violated the UNIQUE
INDEX created by the PRIMARY KEY constraint and resulted in an error. The
transaction fell into the ABORT STATE. All queries in this state are ignored
until we ROLLBACK (or COMMIT, or ABORT) the transaction manually.

Then we issued a COMMIT statement. It committed nothing, but it finished the
transaction block.

And finally we SELECT'ed all rows from the table foo. As expected, we found
none. That means the first INSERT statement had been rolled back, even though
we issued COMMIT rather than ROLLBACK.

Alex Bolenok.


Re: ROLLBACK automatically

From
Sevo Stille
Date:
Kshipra wrote:
>
> Hello all,
> I would like to mention something in this regard.
> I have executed all the commands given here in the same order, but what the
> author says, that after the insert fails whatever you have inserted rolls
> back, is not the case.
> As we all know, Postgres works in autocommit mode,

Change that to "MAY work in autocommit mode".

> As we all know, Postgres works in autocommit mode, so when a user
> successfully inserts a row into a table and then tries to insert the same
> row again, the already entered record will not get deleted from the table.

For different reasons. SQL has no notion of "inserting the same row". If
you insert something twice, you will have two rows with the same values
in your table - which may be perfectly valid in some cases.

> On top of autocommit, we are executing COMMIT; so it further explicitly
> commits the first transaction

You can't commit anything twice. Autocommit implicitly creates a
transaction around each statement. Any extra explicit COMMIT outside a
transaction block won't do anything but raise a warning.
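
A quick sketch of that, in the same style as the examples below (the exact
server message varies by version, so it is only described in the comments):

CREATE TABLE foo (t text);
INSERT into foo values ('bar');  -- runs in its own implicit transaction and commits
COMMIT;                          -- no transaction block is open: nothing to commit,
                                 -- only a notice/warning is raised
DROP table foo;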

> and will NOT ROLLBACK the successfully entered row.

Nor would it inside a transaction block.

CREATE TABLE foo (t text);
INSERT into foo values ('bar');
INSERT into foo values ('bar');
SELECT * from foo;
DROP table foo;

will give you two instances of bar inside table foo, just like the
transactional version

CREATE TABLE foo (t text);
BEGIN TRANSACTION;
INSERT into foo values ('bar');
INSERT into foo values ('bar');
COMMIT;
SELECT * from foo;
DROP table foo;

would. There is a difference between

CREATE TABLE foo (i int4);
INSERT into foo values (1);
INSERT into foo values ('bar');
SELECT * from foo;
DROP table foo;

and

CREATE TABLE foo (i int4);
BEGIN TRANSACTION;
INSERT into foo values (1);
INSERT into foo values ('bar');
COMMIT;
SELECT * from foo;
DROP table foo;

in that the data type error on the second insert makes the entire transaction
fail in the second example, leaving you with an empty table, whereas in the
first example the first insert makes it into the table, since its autocommit
transaction has already committed successfully before the faulty statement
gets executed.

> It should not happen that way under any condition anyway, otherwise there
> would be so much duplication of work.

Sorry, you lost me there...

Sevo


--
sevo@ip23.net

Re: ROLLBACK automatically

From
Chris Bitmead
Date:
And what if I didn't want the commit to fail? What if I half expected the
insert to fail and then wanted to do an update instead? That's a pretty
common pattern: try to insert, and if that fails, do an update instead.

Is this behaviour some kind of standard? Can it be changed?
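
For concreteness, a sketch of that pattern against the foo table from Alex's
example, and why it does not work in a single PostgreSQL transaction block:
the failed INSERT puts the block into ABORT STATE, so the UPDATE is ignored
and the final COMMIT only closes (and effectively rolls back) the block.

BEGIN;
INSERT INTO foo VALUES (1, 'Some value');        -- fails if id 1 already exists...
UPDATE foo SET name = 'Some value' WHERE id = 1; -- ...so this is ignored
COMMIT;                                          -- closes the aborted block; nothing is kept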

Alex Bolenok wrote:
> Autocommit mode means that when you run a statement _outside_ a transaction
> block, it implicitly starts its own transaction block, executes, and then
> commits that transaction. When you _explicitly_ start a transaction block
> with a BEGIN statement, it will not be committed until you _explicitly_
> commit it with a COMMIT statement. [...]

Re: ROLLBACK automatically

From
Chris Bitmead
Date:
"Billy G. Allie" wrote:
>
> Chris Bitmead wrote:
> >
> > And what if I didn't want the commit to fail? What if I half expected the
> > insert to fail and then wanted to do an update instead? That's a pretty
> > common pattern: try to insert, and if that fails, do an update instead.
> >
> > Is this behaviour some kind of standard? Can it be changed?
>
> Hmmmmm.....  where to begin.
>
> You use a transaction when you have a series of related inserts/updates that
> must all succeed or all fail.  For example, in an accounting system you make
> a debit to one account and a credit to a different account.  If done outside
> a transaction and one fails, your books are out of balance.  If done in a
> transaction and one fails, then a rollback is done so that neither applies --
> your books remain balanced.  If your updates are not related in such a way
> that a failed insert/update requires the previous updates to be rolled back,
> perform them in separate transactions, or outside of a transaction (using
> the autocommit feature of PostgreSQL).
>
> As for your common pattern -- it's a poor one.  The reason it's a poor one
> is that you are relying on an error condition to determine the course of
> action,

That is normally considered a very good course of action, because it has much
better performance than your solution. Actually, one will usually want to try
an UPDATE first, which in many applications will succeed in 99% of cases, and
then do an INSERT on failure. In other words, only one query instead of two
for most cases. This is a common programming pattern: try the typical case
first and fall back to a backup solution on error. The alternative, trying to
determine in advance whether it will succeed, has worse performance.
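
A sketch of that update-first variant, using the foo table from the earlier
examples; an UPDATE that matches no rows is not an error, so this survives
inside a transaction block, although a concurrent insert can still make the
fallback INSERT fail:

BEGIN;
UPDATE foo SET name = 'Some value' WHERE id = 1;
-- if the client sees 0 rows updated, fall back to:
INSERT INTO foo VALUES (1, 'Some value');
COMMIT;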

> but the error condition has additional side effects

As you say, it doesn't work because of the side effect. But why must we have
this side effect? Isn't the side effect wrong? Shouldn't the application
programmer decide whether a particular error should or shouldn't cause a
rollback?

> (an aborted transaction)
> that prevent easy recovery.  A better pattern would be to do a select instead
> of an insert.  If no rows are returned, do an insert.  If a row is returned,
> do an update.  A select that returns 0 rows is not an error, the transaction
> is not aborted, and you can continue using the transaction until you are
> ready to commit (or roll back) it.

Re: ROLLBACK automatically

From
"Alex Bolenok"
Date:
> And what if I didn't want the commit to fail? What if I half expected the
> insert to fail and then wanted to do an update instead? That's a pretty
> common pattern: try to insert, and if that fails, do an update instead.
>
> Is this behaviour some kind of standard? Can it be changed?

Not in this life :)

PostgreSQL does not support nested transactions. If a transaction fails, the
transaction fails.

You may write a trigger instead of (or combined with) the UNIQUE check, like
this:

CREATE FUNCTION fn_foo_insupd() RETURNS OPAQUE AS '
DECLARE
    chk INT4;
BEGIN
    SELECT id INTO chk FROM foo WHERE id = NEW.id;
    IF FOUND THEN
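        -- the key already exists: run the alternative action and return NULL
        -- so the INSERT or UPDATE that fired this trigger is skipped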
        PERFORM some_action(NEW.id);
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER trg_foo_insupd BEFORE INSERT OR UPDATE ON foo FOR EACH ROW
EXECUTE PROCEDURE fn_foo_insupd();

Note that if you insert a row into a keyed table that has a trigger, you
cannot delete this row or update the key in the same transaction.
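
For completeness, a hypothetical some_action() so the sketch above runs as
written; the body here (logging the skipped key into a made-up foo_log table)
is purely an assumption, not something Alex posted:

CREATE TABLE foo_log (id INT4, note TEXT);

CREATE FUNCTION some_action(INT4) RETURNS INT4 AS '
BEGIN
    -- record that an insert or update hit an already existing key
    INSERT INTO foo_log VALUES ($1, ''duplicate key skipped'');
    RETURN $1;
END;
' LANGUAGE 'plpgsql';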

Alex Bolenok.


Re: ROLLBACK automatically

From
Date:
"On Error Resume Next" is also a common programming "pattern".  It doesn't
make it right.  I believe Mr. Allie misspoke when he said that the problem
with using error handling to direct normal program flow was in the
"additional side-effects".  The problem is not with additional
side-effects, but rather with alternative causes.  That is, any number of
things can cause an UPDATE to fail (including your fat-fingering the SQL
statement).  When you assume that an error is caused by a particular event
and deal with it "elegantly" you move the error out of the realm of
functional errors (which are detectable) and into the realm of logic errors
(which are not detectable).  That is why relying on error handling to
direct normal program flow ranks up there with reusing variables on the
list programming don'ts.

Mr. Allie is correct, the *proper* way to address this issue is with a
preliminary SELECT.
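
A sketch of that approach, again against the foo table used earlier in the
thread; a SELECT that finds nothing is not an error, so the transaction block
stays usable:

BEGIN;
SELECT id FROM foo WHERE id = 1;
-- if the SELECT returned a row:  UPDATE foo SET name = 'Some value' WHERE id = 1;
-- if it returned no rows:        INSERT INTO foo VALUES (1, 'Some value');
COMMIT;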

(Am I too dogmatic?)
David Boerwinkle


Re: ROLLBACK automatically

From
Peter Eisentraut
Date:
Chris Bitmead writes:

> Is this behaviour some kind of standard? Can it be changed?

In the past the answer to the question "Does a statement-level error
warrant a transaction abort?" has usually hinged on the interpretation of
the following clauses of the SQL92 standard.

    3.3.4.1  Exceptions

    The phrase "an exception condition is raised:", followed by the
    name of a condition, is used in General Rules and elsewhere to
    indicate that the execution of a statement is unsuccessful,
    application of General Rules, other than those of Subclause 12.3,
    "<procedure>", and Subclause 20.1, "<direct SQL statement>", may
    be terminated, diagnostic information is to be made available,
    and execution of the statement is to have no effect on SQL-data or
    schemas.


    4.28 SQL-transactions

    [...]

    The execution of a <rollback statement> may be initiated implicitly
    by an implementation when it detects unrecoverable errors. When
    such an error occurs, an exception condition is raised: transaction
    rollback with an implementation-defined subclass code.


So essentially we classify all errors as unrecoverable, which is certainly
dumb, but legal. But then we still don't comply because we don't execute
the rollback automatically but instead hang in some sort of "invalid"
state.

There is certainly big demand for getting rid of this restriction, but the
code changes required could end up being very extensive.


--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden