Re: dealing with file size when archiving databases - Mailing list pgsql-general

From Tom Lane
Subject Re: dealing with file size when archiving databases
Msg-id 19523.1119322410@sss.pgh.pa.us
In response to dealing with file size when archiving databases  ("Andrew L. Gould" <algould@datawok.com>)
List pgsql-general
"Andrew L. Gould" <algould@datawok.com> writes:
> I've been backing up my databases by piping pg_dump into gzip and
> burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
> problems dealing with very large files (>1GB?) on DVD media.  One of my
> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.

Yech.  However, I think you are reinventing the wheel in your proposed
solution.  Why not just use split(1) to divide the output of pg_dump or
pg_dumpall into slices that the DVD software won't choke on?  See
notes at
http://developer.postgresql.org/docs/postgres/backup.html#BACKUP-DUMP-LARGE
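
A minimal sketch of the split(1) approach, assuming a database named "mydb" and a 1 GB slice size (both are placeholders, not from the original message):

```shell
# Dump, compress, and split into slices small enough for the DVD
# software to handle (here: 1000 MB each). The trailing "mydb.dump.gz."
# is the prefix split(1) uses for the output pieces (…aa, …ab, …).
pg_dump mydb | gzip | split -b 1000m - mydb.dump.gz.

# To restore, reassemble the slices in order, decompress, and feed
# the result to psql. cat expands the glob in sorted order, which
# matches the order split(1) produced the pieces in.
cat mydb.dump.gz.* | gunzip | psql mydb
```

Because split and cat operate on opaque byte streams, the reassembled file is byte-for-byte identical to the original dump.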

            regards, tom lane
