Thread: Optimising inside transactions

From: John Taylor
Hi,

I'm running a transaction with about 1600 INSERTs.
Each INSERT involves a subselect.

I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
1/2 the time expected.

Is postgresql optimising the inserts, knowing that it will rollback at the end ?

If not, why do the queries run faster after the failure ?

Thanks
JohnT

Re: Optimising inside transactions

From: "Henshall, Stuart - WCP"

When one statement has an error, you should see something like the following returned for the rest:
NOTICE:  current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*
Meaning none of the rest of the queries are executed.
hth,
- Stuart

> -----Original Message-----
> From: John Taylor [mailto:postgres@jtresponse.co.uk]
>
>
> Hi,
>
> I'm running a transaction with about 1600 INSERTs.
> Each INSERT involves a subselect.
>
> I've noticed that if one of the INSERTs fails, the remaining
> INSERTs run in about
> 1/2 the time expected.
>
> Is postgresql optimising the inserts, knowing that it will
> rollback at the end ?
>
> If not, why do the queries run faster after the failure ?
>
> Thanks
> JohnT

Re: Optimising inside transactions

From: Tom Lane
John Taylor <postgres@jtresponse.co.uk> writes:
> I'm running a transaction with about 1600 INSERTs.
> Each INSERT involves a subselect.

> I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
> 1/2 the time expected.

> Is postgresql optimising the inserts, knowing that it will rollback at the end ?

> If not, why do the queries run faster after the failure ?

Queries after the failure aren't run at all; they're only passed through
the parser's grammar so it can look for a COMMIT or ROLLBACK command.
Normal processing resumes after ROLLBACK.  If you were paying attention
to the return codes you'd notice complaints like

regression=# begin;
BEGIN
regression=# select 1/0;
ERROR:  floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero
-- subsequent queries will be rejected like so:
regression=# select 1/0;
WARNING:  current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*

I'd actually expect much more than a 2:1 speed differential, because the
grammar is not a significant part of the runtime AFAICT.  Perhaps you
are including some large amount of communication overhead in that
comparison?

            regards, tom lane

Re: Optimising inside transactions

From: John Taylor
On Wednesday 12 June 2002 16:36, Tom Lane wrote:
> John Taylor <postgres@jtresponse.co.uk> writes:
> > I'm running a transaction with about 1600 INSERTs.
> > Each INSERT involves a subselect.
>
> > I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
> > 1/2 the time expected.
>
> > Is postgresql optimising the inserts, knowing that it will rollback at the end ?
>
> > If not, why do the queries run faster after the failure ?
>
> Queries after the failure aren't run at all; they're only passed through
> the parser's grammar so it can look for a COMMIT or ROLLBACK command.
> Normal processing resumes after ROLLBACK.  If you were paying attention
> to the return codes you'd notice complaints like
>
> regression=# begin;
> BEGIN
> regression=# select 1/0;
> ERROR:  floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero
> -- subsequent queries will be rejected like so:
> regression=# select 1/0;
> WARNING:  current transaction is aborted, queries ignored until end of transaction block
> *ABORT STATE*

Well, I'm using JDBC, and it isn't throwing any exceptions, so I assumed it was working :-/
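(For the archives: with a current PostgreSQL JDBC driver, each statement sent after the failure should raise an SQLException rather than being silently swallowed, so the client can roll back explicitly. A minimal sketch of that pattern, assuming hypothetical connection details and a reachable server; the table and the deliberately broken statement mirror the examples elsewhere in this thread:)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TxAbortDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; adjust for your setup.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            conn.setAutoCommit(false);      // start an explicit transaction
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO a VALUES (1, 'x')");
                st.executeUpdate("blabla"); // fails and aborts the transaction
                // Any further statement on this connection would now fail
                // ("current transaction is aborted ...") until we roll back.
            } catch (SQLException e) {
                System.err.println("Statement failed, rolling back: "
                        + e.getMessage());
                conn.rollback();            // normal processing resumes here
                throw e;                    // or retry, depending on the app
            }
            conn.commit();
        }
    }
}
```

The point is simply that the exception from the first failed statement is the client's only reliable signal; everything sent between the failure and the ROLLBACK is ignored by the backend.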

>
> I'd actually expect much more than a 2:1 speed differential, because the
> grammar is not a significant part of the runtime AFAICT.  Perhaps you
> are including some large amount of communication overhead in that
> comparison?
>

Yes, now that I think about it, I am getting a bigger differential.
I'm actually running queries to update two slightly different databases in parallel,
so the failing one is taking almost no time at all.

Thanks
JohnT

Re: Optimising inside transactions

From: Manfred Koizar
On Wed, 12 Jun 2002 16:07:26 +0100, John Taylor
<postgres@jtresponse.co.uk> wrote:
>
>Hi,
>
>I'm running a transaction with about 1600 INSERTs.
>Each INSERT involves a subselect.
>
>I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
>1/2 the time expected.
>
>Is postgresql optimising the inserts, knowing that it will rollback at the end ?
>
ISTM "optimising" is not the right word; it doesn't even try to
execute them.

fred=# BEGIN;
BEGIN
fred=# INSERT INTO a VALUES (1, 'x');
INSERT 174658 1
fred=# blabla;
ERROR:  parser: parse error at or near "blabla"
fred=# INSERT INTO a VALUES (2, 'y');
NOTICE:  current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*
fred=# ROLLBACK;
ROLLBACK

Servus
 Manfred