Andres Freund <andres@anarazel.de> writes:
> The reason for that is that the GlobalVisState stuff is computed
> heuristically (and then re-checked if that's not sufficient to prune a
> tuple, unless that's already been done). That's done so GetSnapshotData()
> doesn't have to look at each backend's ->xmin, which is quite a massive
> speedup at higher connection counts, as each backend's ->xmin changes
> much more often than each backend's xid.
OK.
> What exactly do you mean by the "going to huge effort to decide" bit?
I'd looked at all the complexity around GlobalVisState, but failed to
register that it should be pretty cheap on a per-tuple basis. So never
mind that complaint. The point that remains is just that it's different
from HeapTupleSatisfiesVacuum's rules.
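
(For the archives, the per-tuple shape I now understand this to have is
roughly the toy model below.  This is NOT the actual procarray.c code ---
the real entry points are GlobalVisTestIsRemovableXid() and
GlobalVisUpdate() --- everything here is simplified and the names are
made up for illustration.)

/*
 * Toy model of a heuristic visibility horizon with lazy recheck; not
 * PostgreSQL code.  Xids below maybe_needed are certainly not needed by
 * anyone; xids at or above definitely_needed may well be; in between we
 * pay to recompute an accurate bound.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t xid64;

typedef struct
{
    xid64   maybe_needed;       /* conservative, possibly stale bound */
    xid64   definitely_needed;  /* bound at/above which xids may be running */
} VisState;

/* Stand-in for the expensive accurate recomputation. */
static void
recompute(VisState *vis)
{
    vis->maybe_needed = vis->definitely_needed;
}

static bool
xid_removable(VisState *vis, xid64 xid)
{
    if (xid < vis->maybe_needed)
        return true;            /* cheap accept */
    if (xid >= vis->definitely_needed)
        return false;           /* cheap reject */
    recompute(vis);             /* grey zone: do it the hard way, once */
    return xid < vis->maybe_needed;
}

int
main(void)
{
    VisState    vis = {100, 200};

    printf("%d\n", xid_removable(&vis, 50));    /* 1: below both bounds */
    printf("%d\n", xid_removable(&vis, 300));   /* 0: above both bounds */
    printf("%d\n", xid_removable(&vis, 150));   /* forces the recheck   */
    return 0;
}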
>> I think to move forward, we need to figure out what the freezing
>> behavior ought to be for temp tables. We could make it the same
>> as it was before a7212be8b, which'd just require some more complexity
>> in vacuum_set_xid_limits. However, that negates the idea that we'd
>> like VACUUM's behavior on a temp table to be fully independent of
>> whether concurrent transactions exist. I'd prefer to allow a7212be8b's
>> behavior to stand, but then it seems we need to lobotomize the error
>> check in heap_prepare_freeze_tuple to some extent.
> I think that's an argument for what I suggested elsewhere, which is that
> we should move the logic for a different horizon for temp tables out of
> vacuum_set_xid_limits, and into procarray.
But procarray does not seem like a great place for
table-persistence-dependent decisions either?
>> Independently of that, it seems like we need to fix things so that
>> when pruneheap.c is called by vacuum, it makes EXACTLY the same
>> dead-or-not-dead decisions that the main vacuum code makes. This
>> business with applying some GlobalVisState rule or other instead
>> seems just as unsafe as can be.
> It's not great, I agree. Not sure there is a super nice answer
> though. Note that, even before my changes, vacuumlazy can decide
> differently from pruning whether a tuple is live, e.g. when an
> inserting transaction aborts. That's pretty much unavoidable as long
> as we have multiple HTSV calls for a tuple, since none of our locking
> can (nor should) prevent concurrent transactions from aborting.
It's clear that if the environment changes between test A and test B,
we might get different results. What I'm not happy about is that the
rules are different, so we might get different results even if the
environment did not change.
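
To make that concrete with a contrived, self-contained example (none of
this is PostgreSQL code; the cutoff and bounds are just numbers picked to
show the shape of the problem): the same tuple xid, with nothing at all
changing in between, can be judged removable by a vacuum-style test
against a cutoff computed once, and not removable by a heuristic-horizon
test, purely because the two rules differ.

/*
 * Contrived illustration, not PostgreSQL code: two removability rules
 * applied to the very same tuple, with the "environment" held perfectly
 * still, can disagree simply because they are different rules.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t xid64;

/* Vacuum-style rule: compare against a cutoff computed once up front. */
static bool
removable_by_cutoff(xid64 xid, xid64 oldest_xmin)
{
    return xid < oldest_xmin;
}

/*
 * Pruning-style rule: compare against a heuristic, possibly lagging bound
 * (ignoring the recheck path for the sake of the example).
 */
static bool
removable_by_horizon(xid64 xid, xid64 maybe_needed)
{
    return xid < maybe_needed;
}

int
main(void)
{
    xid64   oldest_xmin = 1000;  /* accurate cutoff taken at VACUUM start */
    xid64   maybe_needed = 900;  /* heuristic bound, lagging behind       */
    xid64   tuple_xid = 950;     /* dead tuple caught between the two     */

    printf("cutoff rule says removable:  %d\n",
           removable_by_cutoff(tuple_xid, oldest_xmin));
    printf("horizon rule says removable: %d\n",
           removable_by_horizon(tuple_xid, maybe_needed));
    return 0;
}

I believe the disagreement could also go the other way, if the heuristic
bound gets refreshed mid-scan and moves past the cutoff that vacuum
computed at the start.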
regards, tom lane