Re: Serializable Isolation without blocking - Mailing list pgsql-hackers

From Kevin Grittner
Subject Re: Serializable Isolation without blocking
Date
Msg-id 4A04001D.EE98.0025.0@wicourts.gov
In response to Re: Serializable Isolation without blocking  (Greg Stark <stark@enterprisedb.com>)
Responses Re: Serializable Isolation without blocking  (Greg Stark <stark@enterprisedb.com>)
Re: Serializable Isolation without blocking  ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List pgsql-hackers
Greg Stark <stark@enterprisedb.com> wrote: 
> Well I don't understand what storing locks in an index can
> accomplish if other queries might use other indexes or sequential
> scans to access the records and never see those locks.
> 
> Or does this method only require that writers discover the locks and
> therefore only writers can ever fail due to serialization failures
> they cause?

Well, readers don't need to find the SIREAD locks which other readers
set.  Conflicts between writers are handled with the same techniques
PostgreSQL uses now.  Readers need to look for write locks, and writers
need to look for SIREAD locks.  Neither blocks the other, but finding a
conflict sets a directional "edge" flag on each of the two transactions
involved.  (So we would need to track two booleans per transaction in
addition to the new SIREAD locks.)  When a transaction reaches a state
where both of its "edge" booleans are set, one of the transactions
involved in setting them must be rolled back.

The prototype implementation in Berkeley DB preferred to roll back a
"pivot" transaction (one with both edges set) where possible, so the
failure would usually fall on a transaction which modified data, but
not necessarily -- if the writers involved have already committed and
the reader transaction might otherwise see an invalid database state,
the reader is rolled back.
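
To make the edge bookkeeping concrete, here is a minimal sketch of the
conflict-flag logic described above.  This is not PostgreSQL code; the
class and function names are illustrative, and real SSI tracks conflicts
per lock, not with a single global function.

```python
# Sketch of SSI rw-conflict tracking: each transaction carries two
# directional "edge" booleans, and a transaction with both set (a
# "pivot") is preferred as the rollback victim.

class Transaction:
    def __init__(self, name):
        self.name = name
        self.has_in_edge = False    # a concurrent txn read what we wrote
        self.has_out_edge = False   # we read what a concurrent txn wrote
        self.aborted = False

    def is_pivot(self):
        # Both edges set: this txn sits in the middle of a dangerous
        # structure and is the preferred rollback victim.
        return self.has_in_edge and self.has_out_edge


def record_rw_conflict(reader, writer):
    """A reader's SIREAD lock conflicts with a writer's write lock (or
    vice versa): record the directional edge reader -> writer, then
    abort a pivot if one has appeared.  Returns the victim or None."""
    reader.has_out_edge = True
    writer.has_in_edge = True
    for txn in (reader, writer):
        if txn.is_pivot():
            txn.aborted = True
            return txn
    return None


# Usage: T1 reads a row T2 writes, and T2 reads a row T1 writes.
t1, t2 = Transaction("T1"), Transaction("T2")
record_rw_conflict(t1, t2)            # edge T1 -> T2; no pivot yet
victim = record_rw_conflict(t2, t1)   # edge T2 -> T1; T2 is now a pivot
print(victim.name)                    # prints "T2"
```

Note that neither conflict blocks either transaction; only the second
conflict, which completes a pivot, forces a rollback.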
> I still haven't actually read the paper so I should probably bow out
> from the conversation until I do.  I was apparently already under
> one misapprehension as Laurenz just claimed the paper does not show
> how to prevent "phantoms" (phantom reads I assume?). Perhaps it's
> not as ambitious as achieving true serializability after all?

It does achieve true serializability by the definitions I've read,
although I've found at least one way in which its guarantees are weaker
than traditional blocking techniques: it doesn't guarantee that
transactions at a level less strict than serializable will see a state
which would exist between some serial execution of the serializable
transactions which modify the data, as the blocking schemes do.  As I
said in an earlier post, I'm OK with that, personally.  We should
probably document the difference, to alert someone converting, but the
standard doesn't seem to require the behavior that traditional blocking
approaches provide on this point.
-Kevin

