LARGE db dump/restore for upgrade question - Mailing list pgsql-general

From: Philip Crotwell
Subject: LARGE db dump/restore for upgrade question
Date:
Msg-id: Pine.GSO.4.10.10108141129130.12211-100000@tigger.seis.sc.edu
Responses: RE: LARGE db dump/restore for upgrade question ("Andrew Snow" <andrew@modulus.org>)
           Re: LARGE db dump/restore for upgrade question (Micah Yoder <yodermk@home.com>)
List: pgsql-general
Hi

I have a very large database of seismic data. It is about 27 Gb now, and
growing at about the rate of 1 Gb every 3-4 days. I am running
postgres 7.1.2. I might try to upgrade to 7.2 when it comes out, but I
don't know if it will be possible for me to do 7.3 because of the
pg_dump/pg_restore problem. In a little over a year the database will
probably pass the halfway point on my raid and so it will physically be
impossible to upgrade. Most of the space is probably taken up by large
objects, which I am hoping will make a solution at least a little bit
easier.
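
(A rough back-of-the-envelope check of that growth estimate, using the
27 Gb size and the 1 Gb every 3-4 days rate above. The raid capacity in
the sketch is only a made-up placeholder for illustration, since the
real size of the array isn't stated here.)

    # Hypothetical growth projection; only the current size and growth
    # rate come from the figures above. The raid capacity is a
    # placeholder, not the real size of the A1000 array.
    current_gb = 27.0
    growth_gb_per_day = 1.0 / 3.5          # 1 Gb every 3-4 days
    raid_capacity_gb = 280.0               # placeholder value

    days_to_half = (raid_capacity_gb / 2 - current_gb) / growth_gb_per_day
    print("days until the db fills half the array: %.0f" % days_to_half)
    # With the placeholder capacity this comes out to a bit over a year,
    # which is where the estimate above lands.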

I am trying a pg_dump right now, and in the first 25 minutes it dumped
54Mb, which means that a full dump will take about 200 hours! I would
guess the restore would take about the same amount of time, so I would be
looking at 17 DAYS of downtime to upgrade! Maybe it will speed up later in
the dump, I don't know. And in about 3 months or so it will take me twice
that amount of time. Also, this is on a 4-processor Sun E450 with an A1000
hardware raid, so it's not that I am using old, slow hardware. Just for
comparison, a file system dump to tape took 6 hours, and that was back
when I only had a software raid!
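
(The same kind of arithmetic for the dump estimate, assuming the rate
stays at the 54 Mb per 25 minutes observed so far and that the restore
takes about as long as the dump:)

    # Sanity check on the dump-time extrapolation above, assuming the
    # observed throughput of 54 Mb in the first 25 minutes holds for the
    # whole 27 Gb database.
    db_size_mb = 27 * 1024                 # ~27 Gb database
    rate_mb_per_min = 54.0 / 25.0          # observed dump rate

    dump_hours = db_size_mb / rate_mb_per_min / 60.0
    downtime_days = 2 * dump_hours / 24.0  # dump plus restore
    print("estimated dump time: %.0f hours" % dump_hours)        # ~213
    print("dump + restore downtime: %.1f days" % downtime_days)  # ~17.8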

So, my question is, is it likely that one day postgres will no longer
require dump/restores for upgrades? I would assume that there will always
be a need to tweak the internal layout of files, but I wonder if there
isn't a way to do this "in place" or at least to allow a file system move
of the large objects without requiring them to be dumped as well?
Even better would be if each new version of postgres could read the
immediately previous version's tables, and could convert them in the
background. Maybe just dreaming here. :)

Could something related to making the upgrade less painful for very large
databases be added to the ToDo list even if it isn't a high priority?

Not that I am complaining; postgres seems to handle this data volume quite
well, and it is certainly worth every dollar I didn't pay for it. :)

Any suggestion on how to prepare for the next upgrade would be
appreciated.

thanks,
Philip


