Re: pg_dump's over 2GB - Mailing list pgsql-general

From Jeff Hoffmann
Subject Re: pg_dump's over 2GB
Msg-id 39D4C64F.378F09BB@propertykey.com
In response to pg_dump's over 2GB  ("Bryan White" <bryan@arcamax.com>)
List pgsql-general
Bryan White wrote:
>
> I am thinking that
> instead I will need to pipe pg_dumps output into gzip thus avoiding the
> creation of a file of that size.
>

Sure, I do it all the time.  Unfortunately, I've had it happen a few
times where even the gzipped database dump goes over 2GB, which is a
real PITA since I then have to dump some tables individually.
Generally, I do something like
    pg_dump database | gzip > database.pgz
to dump the database and
    gzip -dc database.pgz | psql database
to restore it.  I've always thought that compression should be a
built-in option for pg_dump, but it's really not that much more work to
just pipe the output and input through gzip.
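For the case where even the gzipped dump exceeds 2GB, one workaround is to pipe the compressed stream through split so that no single file hits the limit. This is a sketch, not something from the original post: it assumes a standard Unix split(1) that accepts -b with a size in megabytes, and "mydb" is a placeholder database name.

```shell
# Dump, compress, and split into ~1000MB pieces named mydb.pgz.aa,
# mydb.pgz.ab, ... so each file stays well under the 2GB limit.
# "mydb" is a hypothetical database name.
pg_dump mydb | gzip | split -b 1000m - mydb.pgz.

# Restore: concatenate the pieces in order (shell glob sorts them),
# decompress, and feed the SQL back to psql.
cat mydb.pgz.* | gzip -dc | psql mydb
```

The restore works because split names the pieces in lexicographic order, so the glob reassembles the stream exactly as it was written.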

--

Jeff Hoffmann
PropertyKey.com
