Re: pg_dump's over 2GB - Mailing list pgsql-general

From Steve Wolfe
Subject Re: pg_dump's over 2GB
Date
Msg-id 004101c02a33$1913ca80$50824e40@iboats.com
In response to pg_dump's over 2GB  ("Bryan White" <bryan@arcamax.com>)
List pgsql-general
> My current backups made with pg_dump are currently 1.3GB.  I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the
> output exceeds 2GB?

  There are some ways around the 2GB limit if your programs support them;
I'm not sure whether that works with shell redirects, though...
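
  One such workaround (just a sketch, not something from the original
message, and the "pgdump.part." prefix is only an example name) is to pipe
the dump through split(1) so that no single output file ever reaches 2GB:

# write the dump in 1000MB pieces: pgdump.part.aa, pgdump.part.ab, ...
pg_dumpall | split -b 1000m - pgdump.part.

# to restore, concatenate the pieces and feed them back to psql
cat pgdump.part.* | psql template1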

> Currently the dump file is later fed to a 'tar cvfz'.  I am thinking that
> instead I will need to pipe pg_dumps output into gzip thus avoiding the
> creation of a file of that size.

   Why not just pump the data right into gzip?  Something like:

pg_dumpall | gzip --stdout > pgdump.gz

  (I'm sure that the more efficient shell scripters will know a better way)
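
  Restoring is just the reverse; a sketch, assuming the dump was made with
pg_dumpall as above (adjust the database you connect to as needed):

gunzip --stdout pgdump.gz | psql template1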

  If your data is anything like ours, you will get at least a 5:1
compression ratio, meaning you can actually dump around 10 gigs of data
before you hit the 2 gig file limit.

steve

