Re: pg15b2: large objects lost on upgrade - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: pg15b2: large objects lost on upgrade
Date:
Msg-id: CA+TgmobVx8h=1MbOPTd8ctXUG1kYUTsSidPpNrmEJbqOS+98UQ@mail.gmail.com
In response to: Re: pg15b2: large objects lost on upgrade (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: pg15b2: large objects lost on upgrade
Re: pg15b2: large objects lost on upgrade
Re: pg15b2: large objects lost on upgrade
List: pgsql-hackers
On Thu, Aug 4, 2022 at 10:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > 100 << 2^32, so it's not terrible, but I'm honestly coming around to
> > the view that we ought to just nuke this test case.
>
> I'd hesitated to suggest that, but I think that's a fine solution.
> Especially since we can always put it back in later if we think
> of a more robust way.

IMHO it's 100% clear how to make it robust. If you want to check that
two values are the same, you can't let one of them be overwritten by
an unrelated event in the middle of the check. There are many specific
things we could do here, a few of which I proposed in my previous
email, but they all boil down to "don't let autovacuum screw up the
results".
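
The race being described can be sketched in miniature. This is a toy model, not PostgreSQL code: a background thread standing in for autovacuum keeps rewriting a value while the test wants to compare a before-upgrade snapshot with an after-upgrade snapshot. The names (`catalog`, `relfrozenxid`, the sleep durations) are illustrative assumptions; the point is only the pattern of quiescing the concurrent writer before taking the snapshots.

```python
import threading
import time

# Toy model of the race: a value a test wants to verify is preserved
# across an "upgrade", while unrelated background maintenance keeps
# advancing it.
catalog = {"relfrozenxid": 100}
stop = threading.Event()

def autovacuum_worker():
    """Stand-in for autovacuum: unrelated activity that rewrites the value."""
    while not stop.is_set():
        catalog["relfrozenxid"] += 1
        time.sleep(0.0005)

worker = threading.Thread(target=autovacuum_worker)
worker.start()
time.sleep(0.01)                # let some background activity happen

# The robust version of the check: stop the concurrent writer *before*
# snapshotting, so nothing can change the value mid-comparison.
stop.set()
worker.join()
before = catalog["relfrozenxid"]
time.sleep(0.01)                # stands in for the dump/upgrade step
after = catalog["relfrozenxid"]
print(before == after)
```

Without the `stop.set()`/`join()` step, `before` and `after` can differ whenever the worker fires between the two reads, which is exactly the kind of intermittent failure an equality check in a test cannot tolerate.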

But if you don't want to do that, and you also don't want to have
random failures, the only alternatives are weakening the check and
removing the test. It's kind of hard to say which is better, but I'm
inclined to think that if we just weaken the test we're going to think
we've got coverage for this kind of problem when we really don't.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


