Re: Inserting from multiple processes? - Mailing list pgsql-general

From Francisco Olarte
Subject Re: Inserting from multiple processes?
Date
Msg-id CA+bJJbwOjmbAddPqauDZsi0k-SHP566bfXbBPDXHQyDchr6wBw@mail.gmail.com
In response to Re: Inserting from multiple processes?  (Dave Johansen <davejohansen@gmail.com>)
List pgsql-general
Hi Dave:

On Mon, Jun 29, 2015 at 6:32 AM, Dave Johansen <davejohansen@gmail.com> wrote:
> The issue is that the following uses 5 XIDs when I would only expect it to
> use 1:
> BEGIN;
> SELECT insert_test_no_dup('2015-01-01', 1, 1);
....
> END;

I see.

> It appears that the unique violation that is caught and ignored increments
> the XID even though I didn't expect that to happen. I agree that our
> software was burning XIDs needlessly and Postgres handled this situation as
> best as it could. It also sounds like Postgres 9.5 adds features to support
> this sort of use more efficiently, but the XID incrementing on the unique
> violation seems like it could/should be fixed, if it hasn't been already.

IIRC you were using BEGIN/EXCEPTION, which I think uses a savepoint
internally, and that may be what is burning an xid on every execution
( it probably needs one to implement rollback to savepoint properly ).
I've done a simple test which burns one every time the exception is
raised ( using a division by zero ).
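
Just to make the mechanism concrete, a minimal sketch of the
exception-based version ( the table and column names are my invention,
your real insert_test_no_dup surely differs ):

CREATE TABLE test_no_dup ( ts timestamptz, id int, val int,
                           PRIMARY KEY (ts, id) );

CREATE FUNCTION insert_test_no_dup( p_ts timestamptz, p_id int, p_val int )
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    -- the BEGIN/EXCEPTION block opens an implicit savepoint ( a
    -- subtransaction ), and that subtransaction gets its own xid,
    -- which seems to be what gets burnt on every call
    INSERT INTO test_no_dup VALUES ( p_ts, p_id, p_val );
EXCEPTION WHEN unique_violation THEN
    NULL;  -- duplicate key, ignore it
END;
$$;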

If this is your case, you may be able to work around it with a
conditional insert instead of an exception ( sketched below ), and
since you are using a function the potential ugliness stays
encapsulated. It may even be faster, as the docs explicitly say
exception blocks are expensive, but as usual YMMV depending on the
exact query and the collision ratio.
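
Something along these lines would be a starting point ( again using my
hypothetical table from above; note the NOT EXISTS check can still race
with a concurrent insert of the same key, which is what 9.5's
INSERT ... ON CONFLICT DO NOTHING really solves ):

CREATE OR REPLACE FUNCTION insert_test_no_dup( p_ts timestamptz, p_id int, p_val int )
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    -- only attempt the insert when the key is not already there, so
    -- the unique_violation ( and the savepoint machinery ) never fires
    INSERT INTO test_no_dup ( ts, id, val )
    SELECT p_ts, p_id, p_val
    WHERE NOT EXISTS ( SELECT 1 FROM test_no_dup
                       WHERE ts = p_ts AND id = p_id );
END;
$$;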

Francisco Olarte.

