On Thu, Dec 22, 2011 at 7:44 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> On 22.12.2011 01:43, Tom Lane wrote:
>>
>> A "utility to bump the page version" is equally a whole lot easier said
>> than done, given that the new version has more overhead space and thus
>> less payload space than the old. What does it do when the old page is
>> too full to be converted? "Move some data somewhere else" might be
>> workable for heap pages, but I'm less sanguine about rearranging indexes
>> like that. At the very least it would imply that the utility has full
>> knowledge about every index type in the system.
>
>
> Remembering back the old discussions, my favorite scheme was to have an
> online pre-upgrade utility that runs on the old cluster, moving things
> around so that there is enough spare room on every page. It would do normal
> heap updates to make room on heap pages (possibly causing transient
> serialization failures, like all updates do), and split index pages to make
> room on them. Yes, it would need to know about all index types. And it would
> set a global variable to indicate that X bytes must be kept free on all
> future updates, too.
>
> Once the pre-upgrade utility has scanned through the whole cluster, you can
> run pg_upgrade. After the upgrade, old page versions are converted to new
> format as pages are read in. The conversion is straightforward, as the
> pre-upgrade utility has ensured that there is enough spare room on every page.
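The read-time conversion step described above can be sketched as a toy model. To be clear, the header sizes, struct fields, and function names below are illustrative assumptions, not the real PageHeaderData layout or any actual PostgreSQL API; the sketch only shows why the pre-upgrade pass makes the conversion safe:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE       8192
#define OLD_HEADER_SIZE 20   /* hypothetical old page header size */
#define NEW_HEADER_SIZE 24   /* hypothetical new header: 4 extra bytes */

/* Simplified page: offsets are relative to data[], header lives at data[0]. */
typedef struct {
    int  version;            /* page layout version */
    int  lower;              /* end of line-pointer array */
    int  upper;              /* start of tuple space */
    char data[PAGE_SIZE];
} Page;

/* Free space between the line pointers and the tuple data. */
static int page_free_space(const Page *p)
{
    return p->upper - p->lower;
}

/*
 * Convert an old-format page in place as it is read in.  This can only
 * succeed because the pre-upgrade utility guaranteed enough slack on
 * every page; if that pass were skipped, a full page would be
 * unconvertible and we would have to fail.
 */
static bool convert_page(Page *p, int old_version, int new_version)
{
    int extra = NEW_HEADER_SIZE - OLD_HEADER_SIZE;

    if (p->version != old_version)
        return true;                    /* already in the new format */
    if (page_free_space(p) < extra)
        return false;                   /* page too full to convert */

    /* Shift the line-pointer array up to make room for the larger header. */
    memmove(p->data + OLD_HEADER_SIZE + extra,
            p->data + OLD_HEADER_SIZE,
            p->lower - OLD_HEADER_SIZE);
    p->lower += extra;
    p->version = new_version;
    return true;
}
```

The point of the scheme is that convert_page() never needs to move tuples to another page or fail mid-read: the pre-upgrade pass has already guaranteed page_free_space() >= extra everywhere, so the on-read conversion is a purely local header rearrangement.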
That certainly works, but we're still faced with pg_upgrade rewriting
every page, which will take a significant amount of time and offers no
backout plan or rollback facility. I don't like that at all, which is
why I think we need an online upgrade facility if we do have to alter
page headers.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services