On Wed, Jun 14, 2006 at 05:18:14PM -0400, John Vincent wrote:
> On 6/14/06, Jim C. Nasby <jnasby@pervasive.com> wrote:
> >
> >On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> >> Out of curiosity, does anyone have any idea what the ratio of actual
> >> datasize to backup size is if I use the custom format with -Z 0
> >> compression or the tar format?
> >
> >-Z 0 should mean no compression.
>
> But the custom format is still a binary backup, no?
I fail to see what that has to do with anything...
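To be clear: the custom format is binary either way, but with -Z 0 it's
written completely uncompressed, so nothing stops you from compressing it
yourself afterwards. A minimal sketch (mydb is a placeholder name):

    pg_dump -Fc -Z 0 mydb > mydb.dump
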
> >Something you can try is piping the output of pg_dump to gzip/bzip2. On
> >some OSes, that will let you utilize 1 CPU for just the compression. If
> >you wanted to get even fancier, there is a parallelized version of bzip2
> >out there, which should let you use all your CPUs.
> >
> >Or if you don't care about disk IO bandwidth, just compress after the
> >fact (though, that could just put you in a situation where pg_dump
> >becomes bandwidth constrained).
>
> Unfortunately, if we're working with our current source box, the one CPU
> is already the bottleneck when it comes to compression. If I run pg_dump
> from the remote server, though, I might be okay.
Oh, right, forgot about that. Yeah, your best bet may be to run the dump
from an external machine.
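Something along these lines should work (untested sketch; dbserver,
postgres and mydb are placeholders, and I'm assuming the parallel bzip2 I
was thinking of is pbzip2 -- plain gzip or bzip2 drop in the same way):

    pg_dump -h dbserver -U postgres -Fc -Z 0 mydb | pbzip2 -c > mydb.dump.bz2

That way the database box only pays for the dump itself, and the
compression load is spread across all the CPUs on the dump host.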
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461