Re: LARGE db dump/restore for upgrade question - Mailing list pgsql-general

From: Micah Yoder
Subject: Re: LARGE db dump/restore for upgrade question
Msg-id: 01081702530808.01102@eclipse
In response to: LARGE db dump/restore for upgrade question (Philip Crotwell <crotwell@seis.sc.edu>)
List: pgsql-general

> I am trying a pg_dump right now, and in the first 25 minutes it dumped
> 54Mb, which means that a full dump will take about 200 hours! I would
> guess the restore would take about the same amount of time, so I would be
> looking at 17 DAYS of downtime to upgrade! Maybe it will speed up later in

Thinking out loud here ...

Perhaps a slightly modified (or even custom-written) pg_dump could dump your
large tables incrementally.  Since you said the large tables only ever get
inserts, you should be able to run most of this while the DB is up.  Have it
keep track of the timestamp of the last row dumped for each table; the next
time you run it, it can pick up from there and append to the same file.
If you keep the dump around, you won't have to do another large pg_dump the
next time you upgrade, assuming you never do updates on those tables.
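
Something like this shell sketch is what I'm picturing.  It's only an
illustration: the database name, the table name, the timestamp column
(inserted_at), and the file names are all made up, and it assumes the big
table really is insert-only with a timestamp column you can key off of:

  #!/bin/sh
  # Rough incremental "dump" for an insert-only table.
  DB=mydb
  STATE=last_dumped.ts
  OUT=big_table.incremental.out

  # Timestamp of the last row dumped (epoch if this is the first run).
  LAST=`cat $STATE 2>/dev/null || echo '1970-01-01 00:00:00'`

  # Pick the cutoff first, so rows inserted while we dump aren't skipped
  # on the next run.
  CUTOFF=`psql -d $DB -A -t -c "SELECT max(inserted_at) FROM big_table"`

  # Append only the rows that are new since the last run, pipe-delimited.
  psql -d $DB -A -t -c "SELECT * FROM big_table WHERE inserted_at > '$LAST' AND inserted_at <= '$CUTOFF' ORDER BY inserted_at" >> $OUT

  # Remember where we stopped for next time.
  echo "$CUTOFF" > $STATE

Run it from cron as often as you like; each pass only touches the rows added
since the previous one, so it should be cheap compared to a full pg_dump.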

Of course, you'll still have to wait a while for it to restore after every
upgrade.  The other suggestion about fast hardware should help there.
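
On the restore side, the accumulated file could in principle be reloaded with
a plain COPY, something like the line below.  Again, the table name, path,
and delimiter are just placeholders that would have to match whatever the
dump script actually wrote:

  COPY big_table FROM '/path/to/big_table.incremental.out' USING DELIMITERS '|';

(Or psql's \copy if you'd rather read the file from the client side.)  That
avoids the per-row INSERT overhead, though any indexes on the table still
have to be maintained or rebuilt, which is usually the slow part.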

If this doesn't make any sense, sorry, it's 3 in the morning....

> Not that I am complaining, postgres seems to handle this data volume quite
> well, and it is certainly worth every dollar I didn't pay for it. :)

What a ringing endorsement....

--
Like to travel?                        http://TravTalk.org
Micah Yoder Internet Development       http://yoderdev.com

