On 22.12.2011 01:43, Tom Lane wrote:
> A "utility to bump the page version" is equally a whole lot easier said
> than done, given that the new version has more overhead space and thus
> less payload space than the old. What does it do when the old page is
> too full to be converted? "Move some data somewhere else" might be
> workable for heap pages, but I'm less sanguine about rearranging indexes
> like that. At the very least it would imply that the utility has full
> knowledge about every index type in the system.
Remembering back to the old discussions, my favorite scheme was to have an
online pre-upgrade utility that runs on the old cluster, moving things
around so that there is enough spare room on every page. It would do
normal heap updates to make room on heap pages (possibly causing
transient serialization failures, like all updates do), and split index
pages to make room on them. Yes, it would need to know about all index
types. And it would set a global variable to indicate that X bytes must
be kept free on all future updates, too.
Once the pre-upgrade utility has scanned through the whole cluster, you
can run pg_upgrade. After the upgrade, old page versions are converted
to the new format as pages are read in. The conversion is straightforward,
since the pre-upgrade utility ensured that there is enough spare room on
every page.
-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com