Thread: problems with transaction blocks

problems with transaction blocks

From
"Chris Ochs"
Date:
I want to do a series of inserts within a single transaction block, but with
postgresql if one insert fails, the whole block is aborted.  Is there any
way to get around this behavior so that postgresql won't abort the entire
transaction if a single insert returns an error?
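
Roughly the sort of thing I'm running into (table and values made up
just for illustration):

    BEGIN;
    INSERT INTO orders VALUES (1, 'first row');      -- ok
    INSERT INTO orders VALUES (1, 'duplicate key');  -- fails
    -- from here on every statement is rejected with a
    -- "current transaction is aborted" error
    INSERT INTO orders VALUES (2, 'never gets in');
    COMMIT;  -- ends the block, but everything in it is rolled back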

Chris


Re: problems with transaction blocks

From
"Chris Travers"
Date:
Transactions are atomic.  What you are asking to do violates the whole
concept of a transaction.

You can, however, do these inserts outside of the transaction block.
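
For example (table name made up), just issue them without a
surrounding BEGIN/COMMIT and each statement stands or falls on its own:

    INSERT INTO orders VALUES (1, 'goes in');
    INSERT INTO orders VALUES (1, 'fails by itself');  -- error here
    INSERT INTO orders VALUES (2, 'still goes in');    -- unaffected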

Best Wishes,
Chris Travers
----- Original Message -----
From: "Chris Ochs" <chris@paymentonline.com>
To: <pgsql-general@postgresql.org>
Sent: Wednesday, January 07, 2004 7:52 AM
Subject: [GENERAL] problems with transaction blocks




Re: problems with transaction blocks

From
Bruno Wolff III
Date:
On Tue, Jan 06, 2004 at 16:52:12 -0800,
  Chris Ochs <chris@paymentonline.com> wrote:
> I want to do a series of inserts within a single transaction block, but with
> postgresql if one insert fails, the whole block is aborted.  Is there any
> way to get around this behavior so that postgresql won't abort the entire
> transaction if a single insert returns an error?

Currently there is no provision for the application to recover from an
error inside a transaction. What you can do is have the application
check for problems before trying the insert. Depending on the problems
you expect, that may let you avoid attempting an insert that would
fail, in almost all cases.
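
For instance, if duplicate keys are the failures you expect, something
along these lines (names made up) lets the application skip the insert
rather than trip the error:

    -- check first...
    SELECT 1 FROM customers WHERE customer_id = 42;
    -- ...and only if no row came back:
    INSERT INTO customers (customer_id, name) VALUES (42, 'Example Co');

Another session could still slip the same key in between the check and
the insert, so this reduces the chance of an error rather than
eliminating it.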

Re: problems with transaction blocks

From
"scott.marlowe"
Date:
Another good way to handle this is to put a trigger on the table that
diverts inserts that would fail to a holding table.  While this will slow
down the inserts, it will allow you to insert large lists of dubious
quality and worry about the bad rows later.
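
A rough sketch of what I mean, assuming a line_items table whose
inserts can fail on a missing parent order (all names made up):

    -- holding table with the same shape as the real one
    CREATE TABLE line_items_holding AS
        SELECT * FROM line_items WHERE false;

    CREATE FUNCTION divert_bad_line_items() RETURNS trigger AS '
    BEGIN
        -- parent order missing?  stash the row instead of failing
        IF NOT EXISTS (SELECT 1 FROM orders WHERE order_id = NEW.order_id) THEN
            INSERT INTO line_items_holding (order_id, sku, qty)
                VALUES (NEW.order_id, NEW.sku, NEW.qty);
            RETURN NULL;  -- suppress the original insert
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER line_items_divert BEFORE INSERT ON line_items
        FOR EACH ROW EXECUTE PROCEDURE divert_bad_line_items();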

My preference is to fix the data feed, or pre-process it with PHP/Perl to
split it into two files ahead of time, but I'm more of a coder than a DBA.
I get a lot of data to import from other sources at work, and it's often
easier to make the sources fix their data feeds than it is to try to
massage them each and every time.

On Wed, 7 Jan 2004, Chris Travers wrote:

> Transactions are atomic.  What you are asking to do violates the whole
> concept of a transaction.
>
> You can, however, do these inserts outside of the transaction block.


Re: problems with transaction blocks

From
Rodrigo Malara
Date:
Another way is to break up the transaction: instead of one transaction
consisting of many inserts, make each insert a transaction by itself.
Put a BEGIN and a COMMIT (or ROLLBACK) around each insert statement.
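Something like this (table made up):

    BEGIN;
    INSERT INTO items VALUES (1, 'good row');
    COMMIT;

    BEGIN;
    INSERT INTO items VALUES (1, 'duplicate key');  -- fails
    ROLLBACK;  -- only this row is lost

    BEGIN;
    INSERT INTO items VALUES (2, 'next row still goes in');
    COMMIT;
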
HTH
Rodrigo Malara
On Wed, 2004-01-07 at 14:41, scott.marlowe wrote:
> Another good way to handle this is to put a trigger on the table that
> diverts inserts that would fail to a holding table.  While this will slow
> down the inserts, it will allow you to insert large lists of dubious
> quality and worry about the bad rows later.
>
> My preference is to fix the data feed, or pre-process it with PHP/Perl to
> split it into two files ahead of time, but I'm more of a coder than a dba.
> I get a lot of data to import from other sources at work, and it's often
> easier to make the sources fix their data feeds than it is to try and
> massage them each and every time.



Re: problems with transaction blocks

From
Vivek Khera
Date:
>>>>> "RM" == Rodrigo Malara <rodrigomalara@yahoo.com.br> writes:

RM> Another way is break the transaction. Instead of consisting of many
RM> inserts, each insert is a transaction itself.
RM> Do a Begin and a Commit (or rollback) circling the insert statement.

That will destroy your performance if you do it a lot.  And there's no
point in doing an explicit BEGIN/COMMIT, since PG already does that
implicitly for each statement that is not inside a transaction.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: problems with transaction blocks

From
Vivek Khera
Date:
>>>>> "BW" == Bruno Wolff, <Bruno> writes:

BW> Currently there is no provision for recovery from error by the application
BW> inside a transaction. What you can do is have the application check for
BW> problems before trying the insert. Depending on the problems you expect,

I have an application that does this.  First it attempts to do a large
number of inserts, based on some external events, all inside a single
transaction.  If we abort due to a referential integrity constraint
violation, we restart the whole process, but on the second pass a check
is done before each insert to see whether the required FKs are
satisfied.
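
On the retry pass each insert is guarded roughly like this (names made
up):

    -- only attempt the row if its parent order actually exists
    INSERT INTO line_items (order_id, sku, qty)
    SELECT 1001, 'ABC-1', 3
    WHERE EXISTS (SELECT 1 FROM orders WHERE order_id = 1001);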

This gives us the benefit that 99% of the time, when the FKs are OK,
we zip along pretty darned fast, and for the 1% of the time when some
stale data is re-injected into the stream, we just restart that batch
and pay the penalty for it.

The benefit of being able to do all the inserts within a single
begin/end cannot be overstated.


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/