Re: dealing with file size when archiving databases - Mailing list pgsql-general

From: Vivek Khera
Subject: Re: dealing with file size when archiving databases
Date:
Msg-id: EBC16E9E-8140-475B-8E50-2E928EDD6CA4@khera.org
In response to: dealing with file size when archiving databases ("Andrew L. Gould" <algould@datawok.com>)
List: pgsql-general
On Jun 20, 2005, at 10:28 PM, Andrew L. Gould wrote:

> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.

I ran into this as well.  Apparently FreeBSD will not read a large
file on an ISO 9660 file system, even though on a standard UFS or
UFS2 file system it will read files larger than you can make :-).

What I used to do was "split -b 1024m my.dump my.dump-split-" to
break the dump into 1GB pieces and burn those to the DVD.  To
restore, run "cat my.dump-split-?? | pg_restore" with the
appropriate options to pg_restore.
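
In shell terms the whole round trip looks roughly like this.  It is
just a sketch: "mydb", the file names, and the 1GB piece size are
placeholders, and it assumes a custom-format dump, since that is
what pg_restore consumes:

    # dump in pg_restore-compatible custom format
    pg_dump -Fc mydb > my.dump

    # split into pieces small enough for the ISO file system
    # (produces my.dump-split-aa, my.dump-split-ab, ...)
    split -b 1024m my.dump my.dump-split-

    # burn the pieces to DVD; later, to restore:
    cat my.dump-split-?? | pg_restore -d mydb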

My ultimate fix was to start burning and reading the DVDs on my
MacOS desktop instead, which can read and write these large files
just fine :-)


Vivek Khera, Ph.D.
+1-301-869-4449 x806


