Re: BUG #12330: ACID is broken for unique constraints - Mailing list pgsql-hackers

From Merlin Moncure
Subject Re: BUG #12330: ACID is broken for unique constraints
Date
Msg-id CAHyXU0zjQAB5pNgZ3y=1wpWJtttw3g3XCPYHwCBWVgjBOuUyUw@mail.gmail.com
In response to Re: BUG #12330: ACID is broken for unique constraints  (Kevin Grittner <kgrittn@ymail.com>)
Responses Re: BUG #12330: ACID is broken for unique constraints  (Kevin Grittner <kgrittn@ymail.com>)
List pgsql-hackers
On Fri, Dec 26, 2014 at 12:38 PM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>> Just for starters, a 40XXX error report will fail to provide the
>> duplicated key's value.  This will be a functional regression,
>
> Not if, as is normally the case, the transaction is retried from
> the beginning on a serialization failure.  Either the code will
> check for a duplicate (as in the case of the OP on this thread) and
> they won't see the error, *or* the transaction which created
> the duplicate key will have committed before the start of the retry
> and you will get the duplicate key error.

I'm not buying that; that argument assumes duplicate key errors are
always 'upsert' driven.  Although the OP's code may have checked for
duplicates, it's perfectly reasonable (and in many cases preferable) to
let the transaction fail and report the error directly back to the
application.  The application then switches on the error code and
decides what to do: retry on a deadlock or serialization failure, or
abort on a data integrity error.  IOW, the error handling semantics are
fundamentally different and should not be mixed.
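
For illustration, a minimal client-side sketch of the kind of error-code
switch I mean (Python with psycopg2; the table and function names are
hypothetical, not anything from the report): serialization failures and
deadlocks are retried from the top of the transaction, while a unique
violation is treated as a data integrity error and reported straight
back to the caller.

import psycopg2

RETRYABLE = {"40001", "40P01"}    # serialization_failure, deadlock_detected
UNIQUE_VIOLATION = "23505"        # unique_violation

def insert_user_with_retry(conn, email, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO users (email) VALUES (%s)", (email,))
            conn.commit()
            return
        except psycopg2.Error as e:
            conn.rollback()
            if e.pgcode in RETRYABLE:
                continue          # transient failure: retry the whole transaction
            if e.pgcode == UNIQUE_VIOLATION:
                # integrity error: report it, don't retry
                raise ValueError("duplicate email: %s" % email) from e
            raise                 # anything else: propagate unchanged
    raise RuntimeError("gave up after repeated serialization failures")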

merlin


