> My backups made with pg_dump are currently 1.3GB. I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the
> output exceeds 2GB?
There are ways around the limit if the program itself is built with
large-file support, but I'm not sure whether that helps when the output
goes through a shell redirect...
> Currently the dump file is later fed to a 'tar cvfz'. I am thinking that
> instead I will need to pipe pg_dumps output into gzip thus avoiding the
> creation of a file of that size.
Why not just pump the data right into gzip? Something like:
pg_dumpall | gzip --stdout > pgdump.gz
(I'm sure that the more efficient shell scripters will know a better way)
If your data is anything like ours, you will get at least a 5:1
compression ratio, meaning you can actually dump around 10 gigs of data
before you hit the 2 gig file limit.
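If even the compressed dump might cross 2GB, another option is to split
the stream into fixed-size pieces with split(1). A rough sketch (the
chunk size and file names are just illustrative, and printf stands in
for pg_dumpall here so the pipeline can be tried anywhere):

```shell
# Compress the dump on the fly and cut the stream into pieces
# that each stay safely under the 2GB limit.
# In real use, replace the printf with: pg_dumpall
printf 'pretend this is a huge dump\n' \
  | gzip --stdout \
  | split -b 1024m - pgdump.gz.part.

# To restore, concatenate the pieces in order and decompress:
cat pgdump.gz.part.* | gunzip
```

Since no single file in the pipeline is ever larger than the chunk
size, the kernel's per-file limit never comes into play.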
steve