Tom Lane wrote:
> *** /home/postgres/pgsql/src/test/isolation/expected/lock-update-delete_1.out Mon Feb 12 14:53:46 2018
> --- /home/postgres/pgsql/src/test/isolation/output_iso/results/lock-update-delete.out Wed Apr 18 11:30:23 2018
> ***************
> *** 150,156 ****
>
> t
> step s1l: <... completed>
> ! error in steps s2_unlock s1l: ERROR: could not serialize access due to concurrent update
>
> starting permutation: s2b s1l s2u s2_blocker1 s2r s2_unlock
> pg_advisory_lock
> --- 150,158 ----
>
> t
> step s1l: <... completed>
> ! key value
> !
> ! 1 1
>
> starting permutation: s2b s1l s2u s2_blocker1 s2r s2_unlock
> pg_advisory_lock
>
> It looks like maybe this one wasn't updated in 533e9c6b0 --- would
> you check/confirm that?
I think the new output is correct in REPEATABLE READ, but it represents a
bug in SERIALIZABLE mode.
The case is this: a tuple is updated without modifying its key, while a
concurrent transaction tries to read the tuple. The original expected
output says that the reading transaction is aborted, which matches what
the test comment says:
# When run in REPEATABLE READ or SERIALIZABLE transaction isolation levels, all
# permutations that commit s2 cause a serializability error; all permutations
# that rollback s2 can get through.
In REPEATABLE READ it seems fine to read the original version of the
tuple (returns 1) and not raise an error (the reading transaction will
simply see the value that was current when it started). But in
SERIALIZABLE mode, as far as I understand it, this case should raise a
serializability error.
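Reduced to plain SQL, the scenario looks roughly like the sketch below. This is an illustrative reconstruction, not the exact isolation spec: the table name foo, the key/value columns, and the use of FOR KEY SHARE for the reader's lock are assumptions on my part.

```sql
-- Session 2: update the tuple, leaving its key unchanged.
BEGIN ISOLATION LEVEL READ COMMITTED;
UPDATE foo SET value = value + 1 WHERE key = 1;

-- Session 1: a concurrent reader at a higher isolation level.
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM foo WHERE key = 1 FOR KEY SHARE;  -- blocks behind session 2

-- Session 2 commits, unblocking session 1.
COMMIT;
```

With the new behavior, session 1's SELECT completes and returns the row (key 1, value 1) it could see at snapshot time. That is sensible under REPEATABLE READ, but under SERIALIZABLE I would expect "ERROR: could not serialize access due to concurrent update" instead.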
I hope I'm wrong. Copying Kevin, just in case.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services