Re: pg_upgrade --jobs - Mailing list pgsql-general

From: Adrian Klaver
Subject: Re: pg_upgrade --jobs
Date:
Msg-id: 6ea43675-dd00-cd26-11a8-0444f575f6c9@aklaver.com
In response to: Re: pg_upgrade --jobs (senor <frio_cervesa@hotmail.com>)
Responses: Re: pg_upgrade --jobs
List: pgsql-general

On 4/7/19 12:05 PM, senor wrote:
> Thank you Adrian. I'm not sure if I can provide as much as you'd need for a definite answer but I'll give you what I have.
> 
> The original scheduled downtime for one installation was 24 hours. By 21 hours it had not completed the pg_dump schema-only, so it was returned to operation.

So this is more than one cluster?

I am assuming the below was repeated at different sites?
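
For reference, I read the schema-only dump above as something along these
lines (the database name and output file are placeholders, not from the
thread):

    pg_dump --schema-only --file=schema.sql yourdb

A plain pg_dump like this runs as a single process, which would fit a very
long runtime over thousands of tables.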

> The amount of data per table is widely varied. Some daily tables are 100-200GB and thousands of reports tables with stats are much smaller. I'm not connected to check now but I'd guess 1GB max. We chose to use the --link option partly because some servers do not have the disk space to copy. The time necessary to copy 1-2TB was also going to be an issue.
> 
> The vast majority of activity is on current day inserts and stats reports of that data. All previous days and existing reports are read only.
> 
> As is all too common, the DB usage grew with no redesign so it is a single database on a single machine with a single schema.
> 
> I get the impression there may be an option of getting the schema dump while in service but possibly not in this scenario. Plan B is to drop a lot of tables and deal with imports later.

I take the above to mean that a lot of the tables are cruft, correct?
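
For reference, as I understand the plan above, the link-mode upgrade would
be invoked roughly like this (the paths and job count are placeholders, not
taken from this thread):

    pg_upgrade --link --jobs=8 \
      --old-bindir=/usr/pgsql-OLD/bin \
      --new-bindir=/usr/pgsql-NEW/bin \
      --old-datadir=/var/lib/pgsql/OLD/data \
      --new-datadir=/var/lib/pgsql/NEW/data

Note that --jobs parallelizes across databases and tablespaces, so with a
single database in the cluster it will not speed up the schema dump/restore
step.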

> 
> I appreciate the help.
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com


