> The original scheduled downtime for one installation was 24 hours. By 21 hours it had not completed the pg_dump schema-only so it was returned to operation.
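For reference, that schema-only dump is presumably an invocation along
these lines (the database name here is a placeholder):

    # Hypothetical schema-only dump. pg_dump takes a lock on every table
    # and walks the catalogs per object, so with many thousands of tables
    # this step alone can run very long.
    pg_dump --schema-only --format=custom --file=schema.dump mydb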
To me, your best option is to create a Slony cluster with the version you
need to upgrade to. When Slony is in sync, simply make it the master and
switch to it. It may take a while for Slony replication to catch up, but
once it has, there will be very little downtime for the switchover.
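A minimal sketch of that switchover step, assuming a two-node Slony
cluster (here named "upgrade") that is already replicating set 1 from
node 1 (the old version) to node 2 (the new version); the conninfo
strings are placeholders:

    # Hypothetical slonik script: lock the set, wait for node 2 to catch
    # up, then move the origin so node 2 (the new version) becomes the
    # master.
    slonik <<'EOF'
    cluster name = upgrade;
    node 1 admin conninfo = 'dbname=mydb host=oldhost user=slony';
    node 2 admin conninfo = 'dbname=mydb host=newhost user=slony';
    lock set (id = 1, origin = 1);
    wait for event (origin = 1, confirmed = 2, wait on = 1);
    move set (id = 1, old origin = 1, new origin = 2);
    EOF

Writes are blocked only between the lock and the move, so the outage is
on the order of seconds once replication is caught up.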
On 4/7/19 12:05 PM, senor wrote:
> Thank you, Adrian. I'm not sure I can provide as much as you'd need for a definite answer, but I'll give you what I have.
>
> The original scheduled downtime for one installation was 24 hours. By 21 hours it had not completed the pg_dump schema-only so it was returned to operation.
So this is more than one cluster? I am assuming the process below was
repeated at the different sites?
> The amount of data per table varies widely. Some daily tables are 100-200GB, and the thousands of report tables holding stats are much smaller; I'm not connected to check now, but I'd guess 1GB max. We chose the --link option (a minimal invocation is sketched below) partly because some servers do not have the disk space to copy. The time needed to copy 1-2TB was also going to be an issue.
> The vast majority of activity is current-day inserts and stats reports on that data. All previous days and existing reports are read-only.
> As is all too common, the DB usage grew with no redesign, so it is a single database on a single machine with a single schema.
> I get the impression it may be possible to take the schema dump while in service, but perhaps not in this scenario. Plan B is to drop a lot of tables and deal with imports later.
I take the above to mean that a lot of the tables are cruft, correct?
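For context on the --link option mentioned above: it has pg_upgrade
hard-link the old cluster's data files into the new cluster instead of
copying them, so no second copy of the 1-2TB is needed. A minimal
sketch, assuming an upgrade from 9.6 to 11 with stock paths (adjust to
your layout):

    # Hypothetical pg_upgrade invocation using hard links (no data copy).
    # -b/-B and -d/-D are the short forms of the old/new bindir and
    # datadir options below.
    /usr/pgsql-11/bin/pg_upgrade \
        --old-bindir=/usr/pgsql-9.6/bin \
        --new-bindir=/usr/pgsql-11/bin \
        --old-datadir=/var/lib/pgsql/9.6/data \
        --new-datadir=/var/lib/pgsql/11/data \
        --link

The trade-off is that once the new cluster is started, the old cluster
can no longer be safely used, so a failed upgrade means restoring from
backup rather than simply starting the old server again.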
>
> I appreciate the help.
>
--
Adrian Klaver
adrian.klaver@aklaver.com
--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!