Re: a faster compression algorithm for pg_dump - Mailing list pgsql-hackers

From daveg
Subject Re: a faster compression algorithm for pg_dump
Date
Msg-id 20100415005447.GI23641@sonic.net
In response to Re: a faster compression algorithm for pg_dump  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Tue, Apr 13, 2010 at 03:03:58PM -0400, Tom Lane wrote:
> Joachim Wieland <joe@mcknight.de> writes:
> > If we still cannot do this, then what I am asking is: What does the
> > project need to be able to at least link against such a compression
> > algorithm?
> 
> Well, what we *really* need is a convincing argument that it's worth
> taking some risk for.  I find that not obvious.  You can pipe the output
> of pg_dump into your-choice-of-compressor, for example, and that gets
> you the ability to spread the work across multiple CPUs in addition to
> eliminating legal risk to the PG project.  And in any case the general
> impression seems to be that the main dump-speed bottleneck is on the
> backend side not in pg_dump's compression.
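
In practice, the pipe approach looks roughly like this (a sketch, not
from the thread itself; pigz is one example of a parallel compressor,
and mydb is a placeholder database name):

    # Plain-format dump piped through a multi-core compressor:
    pg_dump mydb | pigz -p 8 > mydb.sql.gz

    # Or keep the custom format but disable its internal zlib
    # compression and compress externally instead:
    pg_dump -Fc -Z0 mydb | pigz -p 8 > mydb.dump.gz

    # The second variant has to be decompressed again before
    # pg_restore can read it:
    unpigz -c mydb.dump.gz | pg_restore -d mydb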

My client uses pg_dump -Fc and produces about 700GB of compressed PostgreSQL
dumps nightly from multiple hosts. They also depend on being able to read and
filter the dump catalog, so piping a plain dump through an external compressor
is not an option for them. A faster compression algorithm would be a huge
benefit for dealing with this volume.
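
For reference, the catalog filtering referred to above is typically done
with pg_restore's -l/-L options (a sketch; the file names are
placeholders):

    # Write the archive's table of contents to a file:
    pg_restore -l mydb.dump > toc.list

    # Edit toc.list to comment out unwanted entries, then restore
    # only what remains selected:
    pg_restore -L toc.list -d mydb mydb.dump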

-dg

-- 
David Gould       daveg@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.

