Re: Page-level version upgrade (was: Block-level CRC checks) - Mailing list pgsql-hackers

From Greg Smith
Subject Re: Page-level version upgrade (was: Block-level CRC checks)
Msg-id 4B16AD1C.8000604@2ndquadrant.com
In response to Re: Page-level version upgrade (was: Block-level CRC checks)  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Robert Haas wrote:
> The problem I'm referring to is that there is no guarantee that you
> would be able predict how much space to reserve.  In a case like CRCs,
> it may be as simple as "4 bytes".  But what if, say, we switch to a
> different compression algorithm for inline toast?
Upthread, you made a perfectly sensible suggestion: use the CRC 
addition as a test case to confirm you can build something useful that 
allows slightly more complicated in-place upgrades than are supported 
now. This requires some new code to do tuple shuffling, communicate 
reserved space, and so on: all things that seem quite sensible to have 
available, useful steps toward a more comprehensive solution, and an 
achievable goal you wouldn't even have to argue about.

Now you're wandering us back down the path where we have to solve a 
"migrate TOAST changes" level of problem in order to make progress. 
Starting by presuming you have to solve the hardest possible issue 
around is the well-documented path to failure here. We've seen multiple 
such solutions before, and they all had trade-offs deemed unacceptable: 
either a performance loss for everyone (not just people upgrading), or 
unbearable code complexity. There's every reason to believe your 
reinvention of the same techniques will suffer the same fate.

When someone has such a change to be made, maybe you could bring this 
back up again and gain some traction. One of the big lessons I took 
from the 8.4 development cycle's lack of progress on this class of 
problem: no work to make upgrades easier will get accepted unless there 
is an upgrade on the table that requires it. You need a test case to 
make sure the upgrade approach a) works as expected, and b) is code 
that must be committed now or in-place upgrade is lost. Anything else 
will be deferred; I don't think there's any interest left at this point 
in solving a speculative future problem, given that it will be code we 
can't even prove works.

> Another problem with a pre-upgrade utility is - how do you verify,
> when you fire up the new cluster, that the pre-upgrade utility has
> done its thing?
Some additional catalog support was suggested to mark what the 
pre-upgrade utility had processed. I'm sure I could find the messages 
about it again if I had to.

> If all the logic is in the new server, you may still be in hot water
> when you discover that it can't deal with a particular case.
If you can't design a pre-upgrade script without showstopper bugs, what 
makes you think the much more complicated code in the new server (which 
will be carrying around an ugly mess of old and new engine parts) will 
work as advertised? I think we'll be lucky to get the simplest possible 
scheme implemented, and that any of these more complicated ones will 
collapse under the weight of their own complexity.

Also, your logic seems to presume that no backports are possible to the 
old server. A bug fix to the pre-upgrade script is a completely 
reasonable and expected candidate for backporting, because it will be 
such a targeted piece of code that adjusting it shouldn't impact 
anything else. The same will not be even remotely true if a bug fix is 
needed in a more complicated system that lives in a regularly traversed 
code path. Having such a tightly targeted chunk of code makes 
pre-upgrade *more* likely to get bug-fix backports, because you won't 
be touching code executed by regular users at all.

The potential code impact of backporting fixes to the more complicated 
approaches here is another major obstacle to adopting one of them. 
That's an issue we didn't even get to last time, because showstopper 
issues popped up first. But that problem was looming, had work 
continued down that path.

-- 
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com


