Re: pg_upgrade on high number tables database issues - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: pg_upgrade on high number tables database issues
Date
Msg-id CAMkU=1yMnhpBpPE3__CFPz+pyEm5capExRxo=ncnq5Smj2T2Kg@mail.gmail.com
In response to pg_upgrade on high number tables database issues  (Pavel Stehule <pavel.stehule@gmail.com>)
Responses Re: pg_upgrade on high number tables database issues  (Bruce Momjian <bruce@momjian.us>)
List pgsql-hackers
On Mon, Mar 10, 2014 at 6:58 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
Hello

I had to migrate our databases from 9.1 to 9.2. We have a high number of databases per cluster (more than 1000) and a high number of tables and indexes per database (sometimes more than 10K, exceptionally more than 100K).

I have seen two problems:

a) very large files
pg_upgrade_dump_db.sql and pg_upgrade_dump_all.sql are written into the postgres HOME directory. It is not possible to change the directory for these files.

Those files should go into whatever your current directory is when you execute pg_upgrade.  Why not just cd into whatever directory you want them to be in?
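
For example (a sketch; the bin and data directory paths below are placeholders for illustration, not your actual layout):

    # Run pg_upgrade from a working directory with room for its dump files;
    # adjust the -b/-B (old/new bindir) and -d/-D (old/new datadir) paths
    # to your installation.
    mkdir -p /srv/pg_upgrade_work
    cd /srv/pg_upgrade_work
    pg_upgrade \
        -b /usr/lib/postgresql/9.1/bin \
        -B /usr/lib/postgresql/9.2/bin \
        -d /var/lib/postgresql/9.1/main \
        -D /var/lib/postgresql/9.2/main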

b) very slow first stage of the upgrade: the schema export is very slow, yet without high IO or CPU utilization.

Does just the pg_upgrade executable have low IO and CPU utilization, or does the entire server?

Several bottlenecks in this area were removed in 9.2 and 9.3. Unfortunately, the worst of them were in the server, so the improvements depend on which version you are upgrading from, and won't help you much when upgrading from 9.1.
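
As a rough way to isolate where the time goes, you could time a schema-only dump of one of the large databases against the old 9.1 cluster directly; pg_upgrade's first stage is essentially a schema-only dump, so this approximates it (the database name here is a placeholder):

    # Time a schema-only dump of one database with many tables/indexes;
    # watching CPU and IO on the server while this runs shows whether the
    # backend or the dump client is the bottleneck.
    time pg_dump --schema-only somedb > /dev/null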

Cheers,

Jeff
