Re: Problem w/ dumping huge table and no disk space - Mailing list pgsql-general

From Joe Conway
Subject Re: Problem w/ dumping huge table and no disk space
Date
Msg-id 010201c137eb$b32d0980$0705a8c0@jecw2k1
In response to Re: Problem w/ dumping huge table and no disk space  (Andrew Gould <andrewgould@yahoo.com>)
List pgsql-general
> Have you tried dumping individual tables separately
> until it's all done?
>
> I've never used the -Z option, so I can't compare its
> compression to piping a pg_dump through gzip.
> However, this is how I've been doing it:
>
> pg_dump db_name | gzip -c > db_name.gz
>
> I have a 2.2 GB database that gets dumped/compressed
> to a 235 MB file.
>
> Andrew
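
Dumping one table at a time might look something like this (the table
and database names below are just placeholders):

  # "big_table" and "db_name" are placeholders
  pg_dump -t big_table db_name | gzip -c > big_table.gz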

Another idea you might try is running pg_dumpall from a different host
(one with ample disk space) using the -h and -U options.
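
For instance (the host name and user name below are only placeholders):

  # "db.example.com" and "postgres" are placeholders
  pg_dumpall -h db.example.com -U postgres | gzip -c > all_dbs.sql.gz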

HTH,

Joe

Usage:
  pg_dumpall [ options... ]

Options:
  -c, --clean            Clean (drop) schema prior to create
  -g, --globals-only     Only dump global objects, no databases
  -h, --host=HOSTNAME    Server host name
  -p, --port=PORT        Server port number
  -U, --username=NAME    Connect as specified database user
  -W, --password         Force password prompts (should happen
                         automatically)
Any extra options will be passed to pg_dump.  The dump will be written
to the standard output.


