Hi.
I'm just quoting the answer to a similar question a few weeks ago:
->Hello
->
->I guess "split" and "gzip" are your friends: pipe the output of
->"pg_dumpall" into "split" with an option that cuts the file into
->pieces, and also use "gzip" or a similar tool to compress them.
->
->Hope this helps
->
->--
->Andreas Hödle (Systemadministration)
->
->Kühn & Weyh Software GmbH
->Linnestr. 1-3
->79110 Freiburg
->
->WWW.KWSOFT.DE
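In concrete terms, the quoted advice boils down to a pipeline like the following. This is only a sketch: the 1000 MB chunk size, the "dump.sql.gz." file prefix, and "template1" as the restore target are my own assumptions, so adjust them to your filesystem limit and setup.

```shell
# Dump, compress, and cut into pieces that stay under the 2 GB file limit.
# split reads from stdin ("-") and writes dump.sql.gz.aa, dump.sql.gz.ab, ...
pg_dumpall | gzip | split -b 1000m - dump.sql.gz.

# To restore: concatenate the pieces in order, decompress, replay via psql.
# (template1 as the target database is an assumption; adjust as needed.)
cat dump.sql.gz.* | gunzip | psql template1
```

Because the shell expands `dump.sql.gz.*` in lexical order and split names its output files alphabetically, plain `cat` reassembles the stream correctly.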
Cheers,
Florian
> -----Original Message-----
> From: pgsql-admin-owner@postgresql.org
> [mailto:pgsql-admin-owner@postgresql.org]On Behalf Of Andreas Hödle
> Sent: Friday, December 07, 2001 5:48 PM
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] pgdumpall_file is bigger than 2 Gigabyte
>
>
> "David M. Richter" wrote:
> >
> > Hello!
> >
> > I've got a problem!
> > My database is almost 5 gigabytes in size,
> > so the dump will take at least 2 GB of hard disk space.
> > But my kernel only supports files up to 2 GB!
> >
> > Any experiences with big dumpfiles?
> >
> > Thanks a lot
> >
> > David
>
> Hello
>
> I guess "split" and "gzip" are your friends: pipe the output of
> "pg_dumpall" into "split" with an option that cuts the file into
> pieces, and also use "gzip" or a similar tool to compress them.
>
> Hope this helps
>
> --
> Andreas Hödle (Systemadministration)
>
> Kühn & Weyh Software GmbH
> Linnestr. 1-3
> 79110 Freiburg
>
> WWW.KWSOFT.DE
>