On 12/29/14, 10:53 AM, Kevin Grittner wrote:
> Merlin Moncure <mmoncure@gmail.com> wrote:
>
>> Serialization errors only exist as a concession to concurrency
>> and performance. Again, they should be returned as sparsely as
>> possible because they provide absolutely (as Tom pointed
>> out) zero detail to the application.
>
> That is false. They provide an *extremely* valuable piece of
> information which is not currently available when you get a
> duplicate key error -- whether the error occurred because of a race
> condition and will not fail for the same cause if retried.
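To make the distinction concrete: SQLSTATE 40001 (serialization_failure) tells the application the error came from a race and a plain retry is safe, while 23505 (unique_violation) alone cannot distinguish a race from a genuine duplicate. A minimal sketch of client-side retry logic (the exception class and transaction callable here are illustrative, not a real driver API):

```python
# Illustrative sketch: retry only on serialization failures.
# SQLSTATE 40001 = serialization_failure (race; safe to retry),
# 23505 = unique_violation (may be a real duplicate; do not blindly retry).

class DatabaseError(Exception):
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

def run_with_retry(txn, max_attempts=3):
    """Run txn, retrying only when it fails with SQLSTATE 40001."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except DatabaseError as e:
            if e.sqlstate == "40001" and attempt < max_attempts - 1:
                continue  # race condition: retry is expected to succeed
            raise  # anything else (e.g. 23505) goes to the application

# Simulated transaction that loses a race once, then commits.
attempts = []
def txn():
    attempts.append(1)
    if len(attempts) == 1:
        raise DatabaseError("40001")
    return "committed"

result = run_with_retry(txn)  # "committed" on the second attempt
```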
As for missing details like the duplicated key value, is there a reasonable way to add that as an errdetail() to a serialization failure error?

We do still have to be careful here, though: you could still have code using our documented upsert methodology inside a serializable transaction, for example via a third-party extension that can't assume anything about the caller's isolation level. Would it still be OK to treat that as a serialization failure and bubble it all the way back to the application?

As part of this, we should probably modify our upsert example so it takes transaction_isolation into account...
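Something along these lines, a sketch based on the merge_db() loop in the docs' plpgsql error-trapping example (the isolation-level branch is my addition, not the documented example): under READ COMMITTED the loop can retry the UPDATE, but under REPEATABLE READ or SERIALIZABLE the snapshot can never see the competing row, so looping is futile and the race should be surfaced to the caller.

```sql
CREATE OR REPLACE FUNCTION merge_db(k INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- Not there: try to insert. A concurrent insert may beat us.
        BEGIN
            INSERT INTO db(a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Under REPEATABLE READ or SERIALIZABLE, retrying within the
            -- same transaction cannot see the concurrently inserted row,
            -- so re-raise and let the application retry the transaction.
            IF current_setting('transaction_isolation') <> 'read committed' THEN
                RAISE;
            END IF;
            -- READ COMMITTED: loop and try the UPDATE again.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```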
> As for the fact that RI violations also don't return a
> serialization failure when caused by a race with concurrent
> transactions, I view that as another weakness in PostgreSQL. I
> don't think there is a problem curing one without curing the other
> at the same time. I have known of people writing their own
> triggers to enforce RI rather than defining FKs precisely so that
> they can get a serialization failure return code and do automatic
> retry if it is caused by a race condition. That's less practical
> to compensate for when it comes to unique indexes or constraints.
Wow, that's horrible. :(
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com