Speeding up pg_upgrade - Mailing list pgsql-hackers

From Bruce Momjian
Subject Speeding up pg_upgrade
Date
Msg-id 20171205140135.GA25023@momjian.us
Responses Re: Speeding up pg_upgrade
List pgsql-hackers
As part of PGConf.Asia 2017 in Tokyo, we had an unconference topic about
zero-downtime upgrades.  After the usual discussion of using logical
replication, Slony, and perhaps having the server be able to read old
and new system catalogs, we discussed speeding up pg_upgrade.

There are clusters that take a long time to dump the schema from the old
cluster and recreate it in the new cluster.  One idea for speeding up
pg_upgrade would be to allow it to be run in two stages:

1.  prevent system catalog changes while the old cluster is running, and
dump the old cluster's schema and restore it in the new cluster

2.  shut down the old cluster and copy/link the data files
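To make the two stages concrete, an invocation might look like the sketch
below.  Note that the `--stage` flag and the `system_catalog_changes`
setting are purely hypothetical illustrations of the proposal; neither
exists in pg_upgrade or the server today:

```shell
# Stage 1: old cluster still running, but system catalog changes are
# blocked (hypothetical server setting, not a real GUC):
#   ALTER SYSTEM SET system_catalog_changes = off;
# Dump the old cluster's schema and restore it into the new cluster
# while the old cluster keeps serving queries (hypothetical flag):
pg_upgrade --old-datadir /pgdata/old --new-datadir /pgdata/new \
           --stage schema

# Stage 2: the downtime window -- stop the old cluster, then
# copy/link the data files into the new cluster (hypothetical flag):
pg_ctl -D /pgdata/old stop
pg_upgrade --old-datadir /pgdata/old --new-datadir /pgdata/new \
           --stage data --link
```

Since the bulky data files are untouched in stage 1, only stage 2 would
need the cluster to be offline.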

My question is whether the schema dump/restore is time-consuming enough
to warrant this optional, more complex API, and whether people would
support adding a server setting that prevents all system catalog changes.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +

