On 23/09/10 02:14, Kevin Grittner wrote:
> There is a rub on the other point, though. Without transaction
> information you have no way of telling whether TN committed before
> T0, so you would need to assume that it did. So on this count,
> there is bound to be some increase in false positives leading to
> transaction rollback. Without more study, and maybe some tests, I'm
> not sure how significant it is. (Actually, we might want to track
> commit sequence somehow, so we can determine this with greater
> accuracy.)
I'm confused. AFAICS there is no way to tell if TN committed before T0
in the current patch either.
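To make that determination at all, we'd need something like the commit
sequence Kevin mentions in the parenthesis above. Just to illustrate the
idea, a standalone toy sketch (CommitSeqNo, RecordCommit and the rest are
invented names, this is not code from the patch):

    /* Toy illustration of commit-sequence tracking, not patch code. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef uint64_t CommitSeqNo;

    static CommitSeqNo nextCommitSeqNo = 1;

    typedef struct
    {
        bool        committed;
        CommitSeqNo commitSeqNo;    /* valid only when committed */
    } XactInfo;

    /* Stamp a transaction with the next sequence number at commit. */
    static void
    RecordCommit(XactInfo *xact)
    {
        xact->committed = true;
        xact->commitSeqNo = nextCommitSeqNo++;
    }

    /*
     * Did this transaction commit before the given snapshot was taken?
     * With a commit sequence number the answer is exact; without it we
     * must conservatively assume "yes", which is where the extra false
     * positives come from.
     */
    static bool
    CommittedBeforeSnapshot(const XactInfo *xact, CommitSeqNo snapshotSeqNo)
    {
        return xact->committed && xact->commitSeqNo < snapshotSeqNo;
    }

    int
    main(void)
    {
        XactInfo    tn = {false, 0};
        CommitSeqNo t0_snapshot;

        RecordCommit(&tn);              /* TN commits first... */
        t0_snapshot = nextCommitSeqNo;  /* ...then T0 takes its snapshot */

        printf("TN committed before T0's snapshot? %s\n",
               CommittedBeforeSnapshot(&tn, t0_snapshot) ? "yes" : "no");
        return 0;
    }

The point is just that with a monotonic counter stamped at commit time,
"did TN commit before T0's snapshot" becomes an exact comparison instead
of a conservative guess.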
> But wait, the bigger problems are yet to come.
>
> The other way we can detect conflicts is a read by a serializable
> transaction noticing that a different and overlapping serializable
> transaction wrote the tuple we're trying to read. How do you
> propose to know that the other transaction was serializable without
> keeping the SERIALIZABLEXACT information?
Hmm, I see. We could record which transactions were serializable in a
new clog-like structure that wouldn't exhaust shared memory.
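I'm thinking of something along the lines of one status bit per xid,
packed into fixed-size pages the way clog packs two bits per xid, so in
the real thing the pages could live in an SLRU and be swapped out to disk
instead of pinning shared memory. A standalone toy sketch of the lookup
(SerializableLog, SetSerializable, WasSerializable are invented names,
not anything in the patch):

    /* Toy in-memory stand-in for a clog-like one-bit-per-xid structure. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    #define PAGE_SIZE       8192
    #define XIDS_PER_PAGE   (PAGE_SIZE * 8)    /* one bit per xid */
    #define NUM_PAGES       16                 /* toy page pool */

    typedef struct
    {
        uint8_t pages[NUM_PAGES][PAGE_SIZE];
    } SerializableLog;

    /* Mark an xid as having run at the serializable isolation level. */
    static void
    SetSerializable(SerializableLog *slog, TransactionId xid)
    {
        uint32_t page = (xid / XIDS_PER_PAGE) % NUM_PAGES;  /* toy wrap */
        uint32_t bit  = xid % XIDS_PER_PAGE;

        slog->pages[page][bit / 8] |= (uint8_t) (1 << (bit % 8));
    }

    /* Later, answer: was this (possibly long-gone) xact serializable? */
    static bool
    WasSerializable(const SerializableLog *slog, TransactionId xid)
    {
        uint32_t page = (xid / XIDS_PER_PAGE) % NUM_PAGES;
        uint32_t bit  = xid % XIDS_PER_PAGE;

        return (slog->pages[page][bit / 8] & (1 << (bit % 8))) != 0;
    }

    int
    main(void)
    {
        SerializableLog *slog = calloc(1, sizeof(SerializableLog));

        SetSerializable(slog, 1234);
        printf("xid 1234: %d, xid 1235: %d\n",
               WasSerializable(slog, 1234), WasSerializable(slog, 1235));
        free(slog);
        return 0;
    }

At one bit per xid that's 8192 transactions per kilobyte, so the space
cost is negligible compared to keeping the SERIALIZABLEXACTs around.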
> And how do you propose to record the conflict without it?
I thought you would just abort the transaction that is about to cause the
conflict right there. The other transaction has already committed, so you
can't do anything about it anymore.
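In other words, when the read notices the conflict, the only remaining
option is to error out in the reading transaction itself. Roughly (toy
code again, invented names, just showing the shape of the check, not what
the patch actually does):

    /* Toy sketch of aborting the reader on a conflict with a committed
     * serializable writer; not the patch's code. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        bool serializable;      /* writer ran at serializable level */
        bool committed;         /* writer has already committed */
        bool overlaps_current;  /* writer overlapped our snapshot */
    } WriterXact;

    /* Stand-in for ereport(ERROR) with SQLSTATE 40001. */
    static void
    serialization_failure(void)
    {
        fprintf(stderr, "ERROR: could not serialize access due to "
                "read/write dependencies among transactions\n");
        exit(1);
    }

    /*
     * Called when a serializable transaction reads a tuple written by
     * another transaction.  If the writer was a concurrent serializable
     * transaction that has already committed, we can't roll it back, so
     * the only option left is to abort ourselves right here.
     */
    static void
    CheckSerializableConflictOnRead(const WriterXact *writer)
    {
        if (writer->serializable && writer->overlaps_current &&
            writer->committed)
            serialization_failure();
    }

    int
    main(void)
    {
        WriterXact writer = {true, true, true};

        CheckSerializableConflictOnRead(&writer);
        printf("no conflict\n");
        return 0;
    }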
> Finally, this would preclude some optimizations which I *think* will
> pay off, which trade a few hundred kB more of shared memory, and
> some additional CPU to maintain more detailed conflict data, for a
> lower false positive rate -- meaning fewer transactions rolled back
> for hard-to-explain reasons. This more detailed information also seems
> to be what Dan S is asking for (on another thread), so that the
> information needed to reduce rollbacks can be logged.
Ok, I think I'm ready to hear about those optimizations now :-).
--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com