Peter Geoghegan <pg@bowt.ie> writes:
> On Mon, Apr 11, 2022 at 8:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> In principle, this is showing an actual bug, because once we drop
>> the buffer pin somebody could replace the page before we get done
>> examining the tuple. I'm not sure what the odds are of that happening
>> in the field, but they're probably mighty low because a just-accessed
>> buffer should not be high priority for replacement.
> I imagine that the greater risk comes from concurrent opportunistic
> pruning.
Good point. I'm afraid that means we need a back-branch fix, which
I guess requires an alternate entry point.

> The other backend's page defragmentation step (from pruning)
> would render our backend's HeapTuple pointer invalid. Presumably it
> would just look like an invalid/non-matching xmin in our backend, at
> the point of control flow that Valgrind complains about
> (heapam_handler.c:509).

Right, but there are other accesses to the page below that point, and in
any case a match failure isn't necessarily the right outcome. What that
code is
trying to do is chain up to the latest version of the tuple, and
the likely end result would be to incorrectly conclude that there
isn't one, resulting in failure to update a tuple that should
have been updated.
regards, tom lane