Upgrading Postgres large databases with blobs - Mailing list pgsql-general

From CAJ CAJ
Subject Upgrading Postgres large databases with blobs
Msg-id 467669b30703101140l6c572699ocba6e830f1fc56c8@mail.gmail.com
List pgsql-general
Hello,

For some reason, my first attempt to send this email to the list didn't get through ....

We have several independent database servers with 50+ GB databases running Postgres 8.0.x. We are planning to upgrade these databases to Postgres 8.2.x over the weekend.

We plan to use the following steps to upgrade each server.

Dumping database
1. Dump the 8.0.x database cluster using the 8.2.x pg_dumpall
% ./pg_dumpall > pgdumpall_backup.sql

2. Dump the 8.0.x database, including large objects, in compressed custom format using the 8.2.x pg_dump
% ./pg_dump -Fc -b -Z9 dbname > pgdump_lobs_backup
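For rehearsal, the two dump steps can be wrapped in a small script. NEW_BIN and DBNAME are placeholders (not paths from this post), and the script defaults to a dry run that only prints the commands; set DRY_RUN=0 to execute them for real:

```shell
#!/bin/sh
# Sketch of the dump phase, run while the 8.0.x server is still up.
# NEW_BIN and DBNAME are placeholders -- adjust for your installation.
set -e

NEW_BIN=${NEW_BIN:-/usr/local/pgsql-8.2/bin}
DBNAME=${DBNAME:-dbname}

# Print each command; execute it only when DRY_RUN=0.
run() {
    echo "+ $1"
    [ "${DRY_RUN:-1}" = "1" ] || sh -c "$1"
}

# 1. Cluster-wide dump (roles, schemas, plain data) with the new pg_dumpall
run "$NEW_BIN/pg_dumpall > pgdumpall_backup.sql"

# 2. Per-database dump including large objects (-b), custom format (-Fc),
#    maximum compression (-Z9)
run "$NEW_BIN/pg_dump -Fc -b -Z9 $DBNAME > pgdump_lobs_backup"
```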


Restoring database
1. Initialize the 8.2.x database
% initdb -D /data/pgdata

2. Restore roles, schemas, and data from the cluster dump (connecting via template1)
% ./psql -d template1 < pgdumpall_backup.sql

3. Drop database dbname (recreated by the cluster restore in step 2), otherwise the restore below will fail because dbname already exists
% dropdb dbname

4. Create fresh dbname
% createdb -O dbowner dbname

5. Restore database with lobs
% ./pg_restore -v -Fc -d dbname -e -U dbowner < pgdump_lobs_backup
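The restore steps can be rehearsed the same way. NEW_BIN, PGDATA, DBNAME, and DBOWNER are placeholders, and the script again defaults to a dry run that only prints the commands (set DRY_RUN=0 to execute); the 8.2.x postmaster must be started on the new data directory between steps 1 and 2:

```shell
#!/bin/sh
# Sketch of the restore phase, run with the 8.2.x binaries installed.
# NEW_BIN, PGDATA, DBNAME and DBOWNER are placeholders -- adjust as needed.
set -e

NEW_BIN=${NEW_BIN:-/usr/local/pgsql-8.2/bin}
PGDATA=${PGDATA:-/data/pgdata}
DBNAME=${DBNAME:-dbname}
DBOWNER=${DBOWNER:-dbowner}

# Print each command; execute it only when DRY_RUN=0.
run() {
    echo "+ $1"
    [ "${DRY_RUN:-1}" = "1" ] || sh -c "$1"
}

# 1. Initialize the new cluster (then start the 8.2.x postmaster on it)
run "$NEW_BIN/initdb -D $PGDATA"

# 2. Restore roles, schemas and data from the cluster dump
run "$NEW_BIN/psql -d template1 -f pgdumpall_backup.sql"

# 3. Drop the copy of dbname recreated by the cluster restore
run "$NEW_BIN/dropdb $DBNAME"

# 4. Recreate dbname with the right owner
run "$NEW_BIN/createdb -O $DBOWNER $DBNAME"

# 5. Restore data and large objects from the custom-format dump;
#    -e aborts on the first error instead of continuing
run "$NEW_BIN/pg_restore -v -Fc -e -U $DBOWNER -d $DBNAME pgdump_lobs_backup"
```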

Some of the problems we have are:
1. We are not sure whether all of the data will survive the dump/restore process above.
2. The dump and restore are far too slow to complete over the weekend (the dump runs at roughly 1 GB/hour on a dual 2 GHz PowerPC G5 with 1 GB RAM and RAID 1 disks).

What is the fastest way to upgrade Postgres for large databases that contain binary objects?

Thanks for all your help.
