Re: pg_dump's over 2GB - Mailing list pgsql-general

From Ross J. Reedstrom
Subject Re: pg_dump's over 2GB
Date 2000-09-29 11:57:11
Msg-id 20000929115711.B5635@rice.edu
In response to Re: pg_dump's over 2GB  (Jeff Hoffmann <jeff@propertykey.com>)
List pgsql-general
On Fri, Sep 29, 2000 at 11:41:51AM -0500, Jeff Hoffmann wrote:
> Bryan White wrote:
> >
> > I am thinking that
> > instead I will need to pipe pg_dumps output into gzip thus avoiding the
> > creation of a file of that size.
>
> sure, i do it all the time.  unfortunately, i've had it happen a few
> times where even gzipping a database dump goes over 2GB, which is a real
> PITA since i have to dump some tables individually.  generally, i do
> something like
>     pg_dump database | gzip > database.pgz

Hmm, how about:

pg_dump database | gzip | split -b 1024m - database_

That will give you 1GB files named database_aa, database_ab, etc.
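If you're worried about a chunk going missing or getting truncated, a
quick sanity check is possible (just a sketch: gzip's -t flag tests the
integrity of the compressed stream read from stdin without writing
anything out):

cat database_* | gzip -t && echo archive ok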

> to dump the database and
>     gzip -dc database.pgz | psql database

And to restore from the chunks:

cat database_* | gunzip | psql database
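Note the shell expands database_* in lexical order, which matches the
aa, ab, ... suffixes split generates, so the chunks are concatenated
back in the right order. To restore into a freshly created database,
something like this should work (the database name here is just a
placeholder):

# "newdb" is a placeholder; substitute your own database name
createdb newdb
cat database_* | gunzip | psql newdb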

Ross Reedstrom
--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way.
[...] [It] is not going away because it has utility for both the developers
and users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.
