From Kevin Grittner
Subject Re: BUG #11919: Serializable isolation issue: one session seeing writes from another session.
Date
Msg-id 1415670938.45502.YahooMailNeo@web122304.mail.ne1.yahoo.com
In response to BUG #11919: Serializable isolation issue: one session seeing writes from another session.  (gavin.panella@canonical.com)
Responses Re: BUG #11919: Serializable isolation issue: one session seeing writes from another session.  (David G Johnston <david.g.johnston@gmail.com>)
List pgsql-bugs
"gavin.panella@canonical.com" <gavin.panella@canonical.com> wrote:

> I think I might have found a bug in PostgreSQL 9.3.5 relating to
> serializable isolation, where code running within a savepoint can see
> data committed in a second session after the commencement of the first,
> but that data then "disappears" when the savepoint is rolled back.

> 1. Open two psql sessions to the same database.
>
> 2. Create an example table:
>
>     create table things (a int unique);
>
> 3. In the first session:
>
>     begin isolation level serializable;
>     insert into things (a) values (1);
>
>   Don't commit yet.
>
> 4. In the second session:
>
>     begin isolation level serializable;
>     savepoint one;
>     insert into things (a) values (1);
>
>   This should hang.
>
> 5. In the first session:
>
>     commit;
>
> 6. Go back to the second session. It will have failed with:
>
>     ERROR:  duplicate key value violates unique constraint "things_a_key"
>     DETAIL:  Key (a)=(1) already exists.
>
> 7. Continue in the second session:
>
>     rollback to savepoint one;
>     select * from things;
>
>   You should see:
>
>       a
>     ---
>     (0 rows)

Interesting.  I see your point, yet I'm not sure what other
behavior we could reasonably provide.  Once the snapshot is set for
the transaction, we can't switch to a different one that would show
more recent activity, so that SELECT at the end *has* to return what
it does.  That leaves the question of whether enforcement of the
unique constraint within the savepoint should behave differently.
Of course, you can make it do so by declaring the constraint
DEFERRED.  There are several other weird things that can happen when
constraints aren't deferred, for example:

test=# create table things (a int unique);
CREATE TABLE
test=# insert into things values (1),(2);
INSERT 0 2
test=# update things set a = a + 1;
ERROR:  duplicate key value violates unique constraint "things_a_key"
DETAIL:  Key (a)=(2) already exists.

So I'm not too concerned that this is another thing that doesn't
really behave in an ACID manner if the constraint is not declared
as DEFERRABLE.  If you want truly logical behavior you should
always declare constraints that way and just put up with the
performance hit that brings.  Try playing around with your test
case if the table is defined this way:

create table things (a int unique deferrable initially deferred);
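With the constraint declared that way, the earlier UPDATE example also
stops failing, because the uniqueness check runs at commit rather than
row by row.  A rough sketch of what you'd see in psql (assuming
default deferred-check semantics):

```sql
create table things (a int unique deferrable initially deferred);
insert into things values (1),(2);
begin;
update things set a = a + 1;  -- no error here: the unique check is deferred
commit;                       -- check runs now; values (2),(3) are unique, so it succeeds
```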

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
