Re: Aborting transaction on error - Mailing list pgsql-general

From Greg Sabino Mullane
Subject Re: Aborting transaction on error
Date
Msg-id E16hAj4-0003X2-00@mclean.mail.mindspring.net
In response to Aborting transaction on error  (Mark Rae <m.rae@inpharmatica.co.uk>)
List pgsql-general


> Why does postgres abort the current transaction on error?

That's what it should do. An error means that the current command could
not be completed, so there is no way the transaction can continue. You
cannot roll back part of a transaction: it is all or nothing within that
transaction. However, I suspect that is not quite what you are asking...
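To see what that looks like from DBI (a rough sketch only; the table
"items" with a unique column "name", the DSN, and the credentials are
made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # One transaction covering several inserts (AutoCommit off).
    my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                           { AutoCommit => 0, RaiseError => 0, PrintError => 1 });

    $dbh->do(q{INSERT INTO items (name) VALUES ('apple')});   # succeeds
    $dbh->do(q{INSERT INTO items (name) VALUES ('apple')});   # unique violation
    $dbh->do(q{INSERT INTO items (name) VALUES ('banana')});  # refused: the backend
        # reports that the current transaction is aborted and ignores commands
        # until the end of the transaction block
    $dbh->rollback;   # nothing from this transaction is kept

Once the second insert fails, every later command in that transaction is
rejected as well; the only way out is a ROLLBACK (a COMMIT at that point
will behave like one).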

> More specifically, I have some DBI scripts which under Oracle
> just insert rows into a table with a unique constraint. If they
> come across a duplicate row they detect and report this error and
> continue. (The duplicates are discarded as they are not needed)

So basically Oracle is reporting the problem and then moving on to
the next record. If each insert is a separate transaction, this is
perfectly understandable behavior.

> However in postgres, it automatically aborts the transaction, so is
> checking for the existence of a duplicate row beforehand the only way
> to avoid this?

Yes, that is definitely one way. It sounds as if all the inserts are inside
one transaction, so any problem with insert #50 causes inserts 1-49 to be
invalidated as the entire transaction fails. One way to get around this is
to make every insert its own transaction: either set AutoCommit to true
when connecting to the database, or issue a COMMIT after each successful
insert. To be able to continue after an error is detected (e.g. a
violation of the unique constraint), make sure the RaiseError attribute
is set to false, and have a way to catch the errors yourself.
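Roughly, in DBI terms (again just a sketch; the table and column names,
DSN, and credentials are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # AutoCommit on: every insert is its own transaction.
    # RaiseError off: a failed insert does not throw; we check it ourselves.
    my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                           { AutoCommit => 1, RaiseError => 0, PrintError => 0 });

    my $sth = $dbh->prepare(q{INSERT INTO items (name) VALUES (?)});

    my @names = ('apple', 'apple', 'banana');
    for my $name (@names) {
        if (defined $sth->execute($name)) {
            print "inserted $name\n";
        }
        else {
            # Only this row fails; report the duplicate and keep going.
            warn "skipped $name: " . $dbh->errstr . "\n";
        }
    }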

If time is critical, you can do a bulk check of all your values beforehand,
then load all the good ones as a single transaction, or use the COPY FROM
command for maximum speed. You could even do something such as dropping
the constraint, loading the data, flushing the duplicates, and
reimplementing the constraint.
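For that last route, a rough sketch (assuming the unique constraint is a
plain unique index named items_name_key on items(name), that the data file
is readable by the backend for a server-side COPY, and that it does not
matter which of the duplicate rows survives):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'pass',
                           { AutoCommit => 1, RaiseError => 1 });

    # Drop the constraint so the bulk load cannot fail on duplicates.
    $dbh->do(q{DROP INDEX items_name_key});

    # Bulk load; COPY is far faster than row-by-row INSERTs.
    $dbh->do(q{COPY items (name) FROM '/tmp/items.dat'});

    # Flush the duplicates, keeping one row per name.
    $dbh->do(q{CREATE TEMP TABLE items_keep AS
               SELECT DISTINCT ON (name) * FROM items});
    $dbh->do(q{DELETE FROM items});
    $dbh->do(q{INSERT INTO items SELECT * FROM items_keep});

    # Reimplement the constraint.
    $dbh->do(q{CREATE UNIQUE INDEX items_name_key ON items (name)});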

If time is not that critical, I'd go with the simplest option: setting
AutoCommit to true and letting the duplicates simply fail.

Greg Sabino Mullane  greg@turnstep.com
PGP Key: 0x14964AC8 200203020947



