Re: Serializable Isolation without blocking - Mailing list pgsql-hackers

From Greg Stark
Subject Re: Serializable Isolation without blocking
Date
Msg-id 407d949e0912310816n54593197g94547f7d74432012@mail.gmail.com
In response to Re: Serializable Isolation without blocking  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List pgsql-hackers
On Thu, Dec 31, 2009 at 3:11 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Yeah, that's why this is a two to four year project.  And I would
> point out that if there *wasn't* a performance hit in serializable
> mode, none of the other isolation levels would exist.  These less
> rigorous modes exist precisely because people are often willing to
> give up some data integrity guarantees, or solve them with more
> labor-intensive techniques, to gain performance.  I certainly
> wouldn't consider removing any of the existing transaction isolation
> levels or attempt to coerce anyone into using them against their will.
> ;-)

Hm, this raises the issue that you'll have to figure out what should
happen when two different transactions are using different isolation
modes. Currently our two isolation modes only control behaviour within
your own transaction, so they coexist perfectly fine.

ISTM you would need transactions in read-committed and snapshot
isolation modes to record what sets of records they read in order for
serializable transactions to be able to make any guarantees at all.
Consider write skew where the read-committed transaction is one of the
writers: it reads a record the serializable transaction later updates,
and updates a record the serializable transaction has read. Unless the
read-committed transaction's reads were recorded somewhere, there is
no way to detect that dependency cycle.


Separately, even if nobody on the system is using true serializable
isolation, the on-disk data structures would need space to store the
additional information just in case somebody does. If that's a
substantial amount of space it might impact performance even for
people who never use the feature.
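
To put a shape on that (an invented layout, purely to make the cost
visible -- nothing like this is in any patch):

    #include <stdint.h>

    /*
     * Illustration only: if every index tuple on disk had to reserve
     * room for serialization bookkeeping, say
     */
    typedef struct IndexTupleSerialInfo
    {
        uint32_t    locker_xid;     /* 4 bytes per index tuple, paid by
                                     * every installation, including
                                     * ones that never once run a
                                     * SERIALIZABLE transaction */
        /* ... followed by the existing tuple header and key data ... */
    } IndexTupleSerialInfo;

Four bytes per index tuple across a large database is not nothing,
and that's before asking where the scan-key information itself lives.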


> I am keeping track of the lists you're putting out there; they should
> be quite useful in the optimization phase.  I do intend to first get
> a patch which is "correct" in the sense of never allowing non-
> serializable behavior, but which contains some of the problems you
> list (although destroying modularity is obviously off the table even
> for that point), and then refine the granularity and performance to
> try to get within bounds which are acceptable for our use, and
> hopefully (eventually) the PostgreSQL community.

Yeah, I'm mostly concerned about the initial design question of where
to store all this extra information. It seems like once you've made
that decision most of the consequences will be pretty much set.

The two proposals on the table -- neither of which seems acceptable to me -- are:

1) Store information in the indexes used by scans, indicating which
scans are in progress or have been done by the transaction. This means
you need some way to store the xid and the scan keys such that you can
guarantee any index maintenance will see them. You also have to be
able to handle arbitrary sets of xids, similar to how we handle
multixacts currently. You'll also need some way to handle deletions,
which currently don't do any index maintenance. This would have to be
done for every index type and would impose design constraints on index
behaviour, since in many cases it's not so easy to arrange things to
provide these guarantees. It also creates a big dependency on planner
behaviour, such that a change in plans will create user-visible
changes in which serialization failures are possible. I fear it would
have terrible false-positive rates for some types of plans, possibly
even being unable to ever complete some queries -- notably ones using
sequential table scans -- which would push people towards arranging
for less efficient plans just to avoid serialization failures. It
would also impose space costs on every index on disk. (There's a
sketch of roughly what each index would have to store after this
list.)

2) Have some kind of in-memory data structure which stores the filter
conditions for scans that are in progress or have been executed by
live transactions. Every update would have to consult this structure
to find conflicting filter conditions. The problem is that there's no
efficient way to search for conflicts: you would have to do a linear
search across all the filter conditions on the same table and do
fairly complex theorem-proving for each one to prove there's no
conflict. It would probably perform terribly. It does have the
advantage of working regardless of the access method and not touching
the index am code at all. (Second sketch below.)
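
To make (1) concrete, here's roughly the shape of the record I
imagine each index would have to persist per scan. The names and
layout are mine, invented purely for illustration:

    #include <stdint.h>

    typedef uint32_t Xid;

    /*
     * Hypothetical per-index entry recording that some set of
     * transactions read a key range.  Any index maintenance that
     * inserts into the range must be guaranteed to see it, and it
     * has to survive until every xid in the set is no longer
     * interesting.
     */
    typedef struct IndexRangeLock
    {
        Xid        *xids;       /* all transactions sharing the lock --
                                 * an arbitrary set, much like a
                                 * multixact */
        int         nxids;
        char       *lo_key;     /* serialized scan keys bounding the
                                 * range that was read, covering keys
                                 * that did NOT exist so later inserts
                                 * into the gap are caught */
        char       *hi_key;
    } IndexRangeLock;

A sequential scan has no useful key range at all -- its "range" is the
whole table -- which is exactly where the false positives come from.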
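
And for (2), the check every update would have to make. Again a
sketch with invented names; prove_no_conflict() stands in for the
genuinely hard part, proving at the expression level that a tuple
cannot satisfy a stored qual:

    #include <stdbool.h>

    typedef struct Predicate Predicate;
    struct Predicate
    {
        Predicate      *next;
        void           *filter_expr;    /* qual from some live scan */
        unsigned int    owner_xid;      /* transaction that ran it */
    };

    /* Returns true only if we can PROVE the tuple can't match the
     * stored filter condition; anything we can't prove has to be
     * treated as a conflict, i.e. a false positive. */
    extern bool prove_no_conflict(void *filter_expr, void *tuple);
    extern void flag_rw_conflict(unsigned int reader,
                                 unsigned int writer);

    static void
    check_update(Predicate *preds_for_table, void *new_tuple,
                 unsigned int my_xid)
    {
        Predicate  *p;

        /* Nothing smarter than a linear walk is available; filter
         * conditions don't index usefully. */
        for (p = preds_for_table; p != NULL; p = p->next)
        {
            if (p->owner_xid == my_xid)
                continue;               /* our own reads are fine */
            if (!prove_no_conflict(p->filter_expr, new_tuple))
                flag_rw_conflict(p->owner_xid, my_xid);
        }
    }

Even hashing or partitioning these by table only divides n; within a
table you still pay the full theorem-proving cost per live predicate.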


Both of these seem unacceptable to me, but perhaps there are
solutions I'm not seeing. I wonder if some kind of hybrid approach is
possible, though it seems like it would combine the worst of both
worlds.

--
greg

