Re: Pg_upgrade speed for many tables - Mailing list pgsql-hackers

From: Bruce Momjian
Subject: Re: Pg_upgrade speed for many tables
Msg-id: 20121106193726.GA21594@momjian.us
In response to: Pg_upgrade speed for many tables (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
On Mon, Nov  5, 2012 at 03:08:17PM -0500, Bruce Momjian wrote:
> Magnus reported that a customer with a million tables was finding
> pg_upgrade slow.  I had never considered many tables to be a problem, but
> decided to test it.  I created a database with 2k tables like this:
>
>     CREATE TABLE test1990 (x SERIAL);
>
> Running the git version of pg_upgrade on that took 203 seconds.  Using
> synchronous_commit=off dropped the time to 78 seconds.  This was tested
> on magnetic disks with a write-through cache.  (No change on an SSD with
> a super-capacitor.)
>
> I don't see anything unsafe about having pg_upgrade use
> synchronous_commit=off.  I could set it just for the pg_dump reload, but
> it seems safe to just use it always.  We don't write to the old cluster,
> and if pg_upgrade fails, you have to re-initdb the new cluster anyway.
>
> Patch attached.  I think it should be applied to 9.2 as well.
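
For reference, the 2,000-table test database described in the quoted message
can be recreated with a short PL/pgSQL loop; this is only a sketch, and the
test1 .. test2000 naming is assumed from the quoted example:

    DO $$
    BEGIN
        -- create 2,000 single-column tables matching the quoted pattern
        FOR i IN 1..2000 LOOP
            EXECUTE format('CREATE TABLE test%s (x SERIAL)', i);
        END LOOP;
    END
    $$;

Each SERIAL column also creates a backing sequence, so every statement touches
several catalogs, not just one.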

Modified patch attached and applied to head and 9.2.  I decided to use
synchronous_commit=off only on the new cluster, just in case we ever do
modify the old cluster.
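
The win comes from not waiting for a WAL flush at every commit during the
schema restore.  For anyone restoring a schema dump by hand rather than
through pg_upgrade, a similar effect can be had at the session level; a
minimal sketch, not the committed patch (which, as described above, applies
the setting only for the new cluster):

    -- If the restore fails you have to re-initdb the new cluster anyway,
    -- so nothing durable is lost by skipping the per-commit WAL flush.
    SET synchronous_commit = off;
    -- ... then run the dump's CREATE statements in this same session ...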

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +

