Re: dealing with file size when archiving databases - Mailing list pgsql-general

From: Andrew L. Gould
Subject: Re: dealing with file size when archiving databases
Date: 20 June 2005 22:44
Msg-id: 200506202244.57084.algould@datawok.com
In response to: dealing with file size when archiving databases  ("Andrew L. Gould" <algould@datawok.com>)
List: pgsql-general
On Monday 20 June 2005 09:53 pm, Tom Lane wrote:
> "Andrew L. Gould" <algould@datawok.com> writes:
> > I've been backing up my databases by piping pg_dump into gzip and
> > burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
> > problems dealing with very large files (>1GB?) on DVD media.  One
> > of my compressed database backups is greater than 1GB, and the
> > result of a gzipped pg_dumpall is approximately 3.5GB.  The
> > processes for creating the iso image and burning the image to DVD-R
> > finish without any problems; but the resulting file is
> > unreadable/unusable.
>
> Yech.  However, I think you are reinventing the wheel in your
> proposed solution.  Why not just use split(1) to divide the output of
> pg_dump or pg_dumpall into slices that the DVD software won't choke
> on?  See notes at
> http://developer.postgresql.org/docs/postgres/backup.html#BACKUP-DUMP-LARGE
>
>             regards, tom lane

Thanks, Tom!  The split option fixes the problem outright, whereas my
"solution" only delays it until a single table grows too large.  Of
course, at that point, I should probably use something other than
DVDs.
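
For anyone finding this thread later, the approach from the docs page
Tom links looks roughly like this (a sketch; the database name "mydb",
the 1000MB slice size, and the file prefix are just placeholders):

    # Dump, compress, and slice the output into pieces small enough
    # for the DVD software to handle (here 1000MB each).
    pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

    # To restore: the shell glob reassembles the pieces in order
    # (split names them .aa, .ab, ...), then gunzip and psql replay them.
    cat mydb.sql.gz.* | gunzip | psql mydb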

Andrew Gould
