Re: WIP: Detecting SSI conflicts before reporting constraint violations - Mailing list pgsql-hackers

From Thomas Munro
Subject Re: WIP: Detecting SSI conflicts before reporting constraint violations
Msg-id CAEepm=2_9PxSqnjp=8uo1XthkDVyOU9SO3+OLAgo6LASpAd5Bw@mail.gmail.com
In response to Re: WIP: Detecting SSI conflicts before reporting constraint violations  (Robert Haas <robertmhaas@gmail.com>)
On Thu, Feb 4, 2016 at 7:48 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Jan 31, 2016 at 5:19 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> As described in a recent Reddit discussion[1] and bug report 9301[2],
>> there are scenarios where overlapping concurrent read-write sequences
>> produce serialization failures without constraints, but produce
>> constraint violations when there is a unique constraint.  A simple
>> example is deciding on a new value for a primary key by first checking
>> the existing contents of a table.
>>
>> This makes life difficult if you're trying to build systems that
>> automatically retry SERIALIZABLE transactions where appropriate,
>> because you have to decide how and when to handle unique constraint
>> violations too.  For example, people have experimented with automatic
>> retry-on-40001 at the level of HTTP requests for Django applications
>> when using the middleware that maps HTTP requests to database
>> transactions, and the same opportunities presumably exist in Java
>> application servers and other web service technologies, but unique
>> constraint violations get in the way of that.
>>
>> Here is an experimental patch to report such SSI conflicts.  I had to
>> make a change to aminsert_function so that the snapshot could be
>> available to btree insert code (which may be unpopular).  The
>> questions on my mind now are:  Are there still conflicting schedules
>> that would be missed, or significant new spurious conflict reports, or
>> other theoretical soundness problems?  Is that PredicateLockPage call
>> sound and cheap enough?  It is the only new code that isn't in a path
>> already doomed to ereport, other than the extra snapshot propagation,
>> and without it read-write-unique-3.spec (taken from bug report 9301)
>> doesn't detect the conflict.
>>
>> Thoughts?
>
> I don't feel qualified to have an opinion on whether this is an
> improvement.  I'm a little skeptical of changes like this on general
> principle because sometimes one clientele wants error A to be reported
> rather than error B and some other clientele wants the reverse.
> Deciding who is right is above my pay grade.

I don't see it as a difficult choice between two reasonable
alternatives.  It quacks suspiciously like a bug.

The theoretical problem with the current behaviour is that by
reporting a unique constraint violation in this case, we reach an
outcome that is neither a serialization failure nor a result that
could occur in any serial ordering of transactions.  The overlapping
transactions both observed that the key they planned to insert was not
present before inserting, and therefore they can't be untangled: there
is no serial order of the transactions where the second transaction to
run wouldn't see the key already inserted by the first transaction and
(presumably) take a different course of action.  (If it *does* see the
value already present in its snapshot, or doesn't even look first
before inserting and it turns out to be present, then it really
*should* get a unique constraint violation when trying to insert.)
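The schedule can be made concrete with a small simulation (plain Python, not PostgreSQL; the table is just a set and the "snapshot" is a copy of it taken at transaction start):

```python
# Simulation of the schedule: two transactions each read the current
# maximum key from their snapshot, then plan to insert max + 1.

def next_key(snapshot):
    """What SELECT COALESCE(MAX(key) + 1, 1) computes over a snapshot."""
    return max(snapshot) + 1 if snapshot else 1

# Serial execution: the second transaction sees the first one's insert
# and therefore computes a different key.
table = set()
k1 = next_key(table); table.add(k1)
k2 = next_key(table); table.add(k2)
assert (k1, k2) == (1, 2)

# Concurrent execution: both snapshots are taken before either insert,
# so both transactions compute the same key.  No serial order produces
# this outcome, which is why it is a serialization anomaly rather than
# a programming error.
table = set()
snap1, snap2 = set(table), set(table)
k1, k2 = next_key(snap1), next_key(snap2)
assert k1 == k2 == 1  # both try to insert 1: unique constraint fires
```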

The practical problem with the current behaviour is that the user has
to work out whether a unique constraint violation indicates:

1.  A programming error -- something is wrong that retrying probably won't fix

2.  An unfavourable outcome in the case that you speculatively
inserted without checking whether the value was present so you were
expecting a violation to be possible, in which case you know what
you're doing and you can figure out what to do next, probably retry or
give up

3.  A serialization failure that has been masked because our coding
happens to check for unique constraint violations without considering
SSI conflicts first -- retrying will eventually succeed.

It's complicated for a human to work out how to distinguish the
third-category errors in each place where they might occur (and even to know
that they are possible, since AFAIK the manual doesn't point it out),
and impossible for an automatic retry-on-40001 framework to handle in
general.  SERIALIZABLE is supposed to be super easy to use (and
devilishly hard to implement...).
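For illustration, here is a sketch of the kind of retry-on-40001 wrapper such frameworks use (all names invented; a stand-in exception class represents a driver error carrying SQLSTATE 40001):

```python
# Hypothetical retry-on-40001 wrapper.  With the patch, a serialization
# failure is the only error such a loop needs to catch for this class of
# conflict; today it would also have to guess which unique constraint
# violations are really masked serialization failures.

class SerializationFailure(Exception):
    """Stands in for a driver error with SQLSTATE 40001."""

def retry_serializable(txn, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return txn()  # run the whole transaction from the top
        except SerializationFailure:
            continue  # safe to rerun: nothing was committed
    raise SerializationFailure("gave up after %d attempts" % max_attempts)

# Demo: a transaction that fails twice with 40001, then commits.
attempts = {"n": 0}
def txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure
    return "committed"

assert retry_serializable(txn) == "committed"
assert attempts["n"] == 3
```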

Here's an example.  Suppose you live in a country that requires
invoices to be numbered without gaps starting at one for each calendar
year.  You write:

  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT COALESCE(MAX(invoice_number) + 1, 1) FROM invoice WHERE year = 2016;
  INSERT INTO invoice VALUES (2016, $1, ...); -- using value computed above
  COMMIT;

I think it's pretty reasonable to expect that to either succeed or
ereport SQLSTATE 40001, and I think it's pretty reasonable to want to
be able to give the code that runs that transaction to a mechanism
that will automatically retry the whole thing if 40001 is reported.
Otherwise the promise of SERIALIZABLE is thwarted: you should be able
to forget about concurrency and write simple queries that assume they
are the only transaction in the universe.  The activities of other
overlapping transactions shouldn't change the outcome of your
transaction, other than potentially triggering 40001.

If you really want unique constraint violations for this case, then
with this patch you can remove the SELECT and get the unique
constraint violation, or use a lower isolation level since you are in
fact depending on a hole in the
isolation between transactions.  Or you could consider the new SSI
conflict to be spurious (though it isn't!) and just retry, which
doesn't seem to have a downside since you already have to handle
those.

I added the (German?) invoice numbering example above as
specs/read-write-unique-4.spec in the attached new patch, with three
interesting permutations.  I am not sure what to think about the third
permutation yet -- an overlap between two transactions where one of
them just writes the key, and the other reads then writes -- the
outcome is possibly wrong there and I don't know off the top of my
head how to fix it if it is.  My thinking was that this case should be
covered by the new predicate lock I introduced, because even though s1
didn't explicitly SELECT anything, by successfully passing the unique
check phase it really has read something, so it's a transaction that
has read (the absence of the value) and written (the value).  This is
the part of the problem that I'm least sure of, but am hoping to work
out with some SSI expert help.
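For readers unfamiliar with the isolation tester, a spec for this scenario might look roughly like the following (a sketch based on the invoice example above, not the actual contents of the attached read-write-unique-4.spec):

```
setup
{
  CREATE TABLE invoice (year int, invoice int, PRIMARY KEY (year, invoice));
}

teardown
{
  DROP TABLE invoice;
}

session "s1"
step "r1" { BEGIN ISOLATION LEVEL SERIALIZABLE;
            SELECT COALESCE(MAX(invoice) + 1, 1) FROM invoice WHERE year = 2016; }
step "w1" { INSERT INTO invoice VALUES (2016, 1); }
step "c1" { COMMIT; }

session "s2"
step "r2" { BEGIN ISOLATION LEVEL SERIALIZABLE;
            SELECT COALESCE(MAX(invoice) + 1, 1) FROM invoice WHERE year = 2016; }
step "w2" { INSERT INTO invoice VALUES (2016, 1); }
step "c2" { COMMIT; }

permutation "r1" "r2" "w1" "w2" "c1" "c2"
```

With the patch, the second insert in this permutation should report a serialization failure rather than a unique constraint violation.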

--
Thomas Munro
http://www.enterprisedb.com

