Re: Optimising inside transactions - Mailing list pgsql-novice

From: Tom Lane
Subject: Re: Optimising inside transactions
Date:
Msg-id: 10113.1023896190@sss.pgh.pa.us
In response to: Optimising inside transactions (John Taylor <postgres@jtresponse.co.uk>)
Responses: Re: Optimising inside transactions
List: pgsql-novice
John Taylor <postgres@jtresponse.co.uk> writes:
> I'm running a transaction with about 1600 INSERTs.
> Each INSERT involves a subselect.

> I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
> 1/2 the time expected.

> Is postgresql optimising the inserts, knowing that it will rollback at the end ?

> If not, why do the queries run faster after the failure ?

Queries after the failure aren't run at all; they're only passed through
the parser's grammar so it can look for a COMMIT or ROLLBACK command.
Normal processing resumes after ROLLBACK.  If you were paying attention
to the return codes you'd notice complaints like

regression=# begin;
BEGIN
regression=# select 1/0;
ERROR:  floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero
-- subsequent queries will be rejected like so:
regression=# select 1/0;
WARNING:  current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*

I'd actually expect much more than a 2:1 speed differential, because the
grammar is not a significant part of the runtime AFAICT.  Perhaps you
are including some large amount of communication overhead in that
comparison?
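[Editor's note: the behaviour Tom describes — an aborted transaction skips execution and only scans each statement for COMMIT/ROLLBACK — can be sketched as a toy state machine. This is plain Python for illustration only, not PostgreSQL code; the function name and responses are invented, and "1/0" stands in for any failing statement.]

```python
# Toy model of a server session in an aborted transaction: real work is
# skipped, and each incoming statement is only checked for COMMIT/ROLLBACK.

def run_session(statements):
    """Process statements in order; return a list of (statement, response) pairs."""
    aborted = False
    log = []
    for stmt in statements:
        first_word = stmt.strip().split()[0].upper()
        if aborted:
            if first_word in ("COMMIT", "ROLLBACK"):
                # end of transaction block: normal processing resumes
                aborted = False
                log.append((stmt, "ROLLBACK"))
            else:
                # only parsed, never planned or executed -- hence the speedup
                log.append((stmt, "WARNING: current transaction is aborted"))
        elif "1/0" in stmt:  # stand-in for any statement that raises an error
            aborted = True
            log.append((stmt, "ERROR: division by zero"))
        else:
            log.append((stmt, "OK"))
    return log

for stmt, resp in run_session(
    ["BEGIN", "SELECT 1/0", "SELECT 2", "ROLLBACK", "SELECT 3"]
):
    print(f"{stmt:12} -> {resp}")
```

In this sketch, everything between the error and the ROLLBACK gets the warning response without being executed, which is why the remaining INSERTs in the failed transaction appear to run faster.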

            regards, tom lane
