Re: postgres 9.3.6, serialize error with two independent, serially consistent transactions.. - Mailing list pgsql-bugs

From Kevin Grittner
Subject Re: postgres 9.3.6, serialize error with two independent, serially consistent transactions..
Date
Msg-id 153754345.412294.1426767755926.JavaMail.yahoo@mail.yahoo.com
In response to postgres 9.3.6, serialize error with two independent, serially consistent transactions..  (Hursh Jain <hurshjain@beesell.com>)
Responses Re: postgres 9.3.6, serialize error with two independent, serially consistent transactions..  (Hursh Jain <hurshjain@beesell.com>)
List pgsql-bugs
Hursh Jain <hurshjain@beesell.com> wrote:

> In the example below, client #1 and client #2 are not doing
> anything that can possibly cause an inconsistent serial order of
> execution. You can pick and choose whichever one you want to run
> first, the final result will be 100% consistent and exactly the
> same, either way.
>
> So why does postgres 9.3.6 give an error to the second client ?
> Unless I am reading the docs wrong, this looks like a major bug.

This is not a bug.  What the use of serializable transactions
guarantees is that the behavior of any set of successfully
committed serializable transactions is consistent with some serial
order of execution of those transactions.  If you retry the second
transaction (the one which got the serialization failure) it will
succeed, and you then have behavior consistent with either order
of execution, so the guarantee is met.  There is no guarantee that
there will be no false positives (cases where a serialization
failure is raised even though, given unlimited time and resources
to analyze everything, it could have been avoided).
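
To make that concrete, here is a minimal two-session sketch of the
sort of pattern that triggers it (the table and values here are
hypothetical, not your exact example):

  CREATE TABLE t (id int, val text);

  -- Session 1:
  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT * FROM t;          -- sequential scan reads the whole table
  -- Session 2:
  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT * FROM t;
  -- Session 1:
  INSERT INTO t VALUES (1, 'a');
  COMMIT;
  -- Session 2:
  INSERT INTO t VALUES (2, 'b');   -- fails here (or at COMMIT) with
  COMMIT;                          -- SQLSTATE 40001, serialization failure
  -- Re-running session 2's whole transaction then succeeds.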

Note that if you add a primary key on the id column, and use values
that do not cause duplicate keys, you do not get any error.  Without
an index, a full table scan is needed, which takes a predicate lock
on the whole table; that lock has a rw-conflict with any insert into
the table, so two transactions doing this concurrently will cause a
serialization failure.  If we tracked enough information to avoid
this case, resource usage and performance would not be in a range
most would consider acceptable; heuristics are needed, and those
inevitably allow false positives in some circumstances.
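
A sketch of that workaround, assuming the reads can then be
satisfied by index lookups on id (again using the hypothetical
table above):

  ALTER TABLE t ADD PRIMARY KEY (id);

  -- A read such as:
  SELECT * FROM t WHERE id = 1;
  -- can now use the index, so the predicate (SIRead) lock covers only
  -- the relevant portion of the index rather than the whole relation,
  -- and concurrent inserts of distinct id values no longer create a
  -- rw-conflict.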

If you try your example on any database which uses strict
two-phase locking (S2PL) to implement serializable transactions
(e.g., MS SQL Server, DB2, MySQL, Berkeley DB), I think you will
find that you also get a false positive serialization failure, in
the form of a deadlock.  In databases which do not actually
provide serializable behavior when you ask for it (e.g., Oracle),
you will not get an error.  If you want that level of transaction
isolation in PostgreSQL you can request REPEATABLE READ, and you
will likewise not get an error.  For a description of the
differences between these isolation levels, see the docs:

http://www.postgresql.org/docs/current/interactive/mvcc.html
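
Requesting the weaker level is just a matter of asking for it; for
example (sketch):

  BEGIN ISOLATION LEVEL REPEATABLE READ;
  -- ... same statements as in the serializable sketch above ...
  COMMIT;

  -- or make it the default for the session:
  SET default_transaction_isolation = 'repeatable read';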

For some examples of cases where you get different results between
these two isolation levels, see:

https://wiki.postgresql.org/wiki/SSI

PostgreSQL gives you a choice of which behavior you prefer.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
