2010/5/25 Dan Ports <drkp@csail.mit.edu>:
> On Mon, May 24, 2010 at 10:24:07AM -0500, Kevin Grittner wrote:
>
>> Replicating or recreating the whole predicate locking and conflict
>> detection on slaves is not feasible for performance reasons. (I
>> won't elaborate unless someone feels that's not intuitively
>> obvious.) The only sane way I can see to have a slave database allow
>> serializable behavior is to WAL-log the acquisition of a snapshot by
>> a serializable transaction, and the rollback or commit, on the
>> master, and to have the serializable snapshot build on a slave
>> exclude any serializable transactions for which there are still
>> concurrent serializable transactions. Yes, that does mean WAL-
>> logging the snapshot acquisition even if the transaction doesn't yet
>> have an xid, and WAL-logging the commit or rollback even if it never
>> acquires an xid.
>
> One important observation is that any anomaly that occurs on the slave
> can be resolved by aborting a local read-only transaction. This is a
> good thing, because the alternatives are too horrible to consider.
>
> You could possibly cut the costs of predicate locking by having the
> master ship with each transaction the list of predicate locks it
> acquired. But you'd still have to track locks for read-only
> transactions, so maybe that's not a significant cost improvement. On
> the other hand, if you're willing to pay the price of serializability
> on the master, why not the slaves too?

I don't understand the problem. As I see it, in the context of SSI, a
read-only slave can simply map SERIALIZABLE onto the technical
implementation of REPEATABLE READ (i.e., the currently existing
"SERIALIZABLE"). The union of the transactions on the master and the
slave(s) will still exhibit SERIALIZABLE behavior, because the
transactions on the slave cannot write anything and are therefore
irrelevant.
Is anything wrong with that reasoning?
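Concretely, the mapping I have in mind is no more than this (a minimal
sketch with an invented helper name, not a real PostgreSQL function):

```python
def effective_isolation(requested: str, is_readonly_slave: bool) -> str:
    """On a read-only slave, a request for SERIALIZABLE silently runs at
    plain snapshot isolation (today's "SERIALIZABLE", i.e. REPEATABLE
    READ under SSI terminology); everywhere else the requested level is
    honored as-is."""
    if is_readonly_slave and requested == "SERIALIZABLE":
        return "REPEATABLE READ"   # snapshot isolation, no SSI tracking
    return requested
```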
Nicolas