dealing with file size when archiving databases

I've been backing up my databases by piping pg_dump into gzip and
burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
problems dealing with very large files (>1GB?) on DVD media.  One of my
compressed database backups is larger than 1GB, and the result of a
gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
the ISO image and burning it to DVD-R finish without any problems, but
the resulting file is unreadable/unusable.

My proposed solution is to modify my python script to:

1. use pg_dump to dump each database's tables individually, including
both the database and table name in the file name;
2. use 'pg_dumpall -g' to dump the global information; and
3. burn the backup directories, files and a recovery script to DVD-R.

The script will pipe pg_dump into gzip to compress the files.
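Something along these lines, simplified for the list (the /backups path,
file naming and catalog queries are illustrative, not my actual script;
it assumes passwordless local access, tables in the public schema, and
pg_dump, pg_dumpall, psql and gzip on the PATH):

#!/usr/bin/env python
import os
import subprocess

BACKUP_DIR = "/backups"   # illustrative destination

def psql_rows(db, query):
    # One stripped line per result row; -A unaligned, -t tuples only.
    out = subprocess.check_output(["psql", "-At", "-d", db, "-c", query])
    return [line for line in out.decode().splitlines() if line]

def dump_gzipped(cmd, path):
    # Pipe the dump command through gzip into the target file.
    with open(path, "wb") as f:
        dump = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        gz = subprocess.Popen(["gzip"], stdin=dump.stdout, stdout=f)
        dump.stdout.close()   # let the dump see SIGPIPE if gzip dies
        if gz.wait() != 0 or dump.wait() != 0:
            raise RuntimeError("failed: %s" % " ".join(cmd))

# One compressed file per table, named <database>.<table>.sql.gz.
for db in psql_rows("postgres",
        "SELECT datname FROM pg_database WHERE NOT datistemplate;"):
    os.makedirs(os.path.join(BACKUP_DIR, db), exist_ok=True)
    for table in psql_rows(db,
            "SELECT tablename FROM pg_tables"
            " WHERE schemaname = 'public';"):
        dump_gzipped(["pg_dump", "-t", table, db],
                     os.path.join(BACKUP_DIR, db,
                                  "%s.%s.sql.gz" % (db, table)))

# Global objects (roles, tablespaces) via 'pg_dumpall -g'.
dump_gzipped(["pg_dumpall", "-g"],
             os.path.join(BACKUP_DIR, "globals.sql.gz"))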

My questions are:

1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump?  Will I
be missing anything?
2. Does anyone foresee any problems with the solution above?

Thanks,

Andrew Gould
