Re: a faster compression algorithm for pg_dump - Mailing list pgsql-hackers

From: Dimitri Fontaine
Subject: Re: a faster compression algorithm for pg_dump
Msg-id: 87iq7uv3f2.fsf@hi-media-techno.com
In response to: Re: a faster compression algorithm for pg_dump (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: a faster compression algorithm for pg_dump (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Well, what we *really* need is a convincing argument that it's worth
> taking some risk for.  I find that not obvious.  You can pipe the output
> of pg_dump into your-choice-of-compressor, for example, and that gets
> you the ability to spread the work across multiple CPUs in addition to
> eliminating legal risk to the PG project.
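
Sure, for the plain text format that is easy enough; a rough sketch,
with pigz standing in for your-choice-of-compressor and made-up
database names:

    # parallel compression of a plain dump
    pg_dump proddb | pigz -p 8 > proddb.sql.gz

    # restoring it later
    pigz -dc proddb.sql.gz | psql stagingdb

But then you lose the custom format and its table of contents.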

Well, I like -Fc and playing with the catalog to restore only the
"interesting" data in staging environments. I even automated all the
catalog mangling in pg_staging so that I just have to set up which
schemas I want, with only the DDL or with the DATA too. The fun part is
when you want to exclude functions that are used in triggers based on
the schema where the function lives, not the one where the trigger
lives, BTW, but that's another story.
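
The catalog mangling boils down to pg_restore's list support; a rough
sketch only, with made-up database, schema and table names:

    # dump production once, custom format
    pg_dump -Fc proddb > proddb.dump

    # list the archive's table of contents, drop the boring entries
    pg_restore -l proddb.dump > toc.list
    grep -v 'TABLE DATA public boring_audit_log' toc.list > toc.edited

    # restore only what is left in the edited list
    pg_restore -L toc.edited -d stagingdb proddb.dump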

So yes, having both -Fc and a compression facility other than plain
gzip would be good news. And benefiting from a better compression
algorithm in TOAST would be good too, I guess (a small hit on size,
a lot faster: it would fit).

Summary: my convincing argument is using the dumps to efficiently
prepare development and testing environments from production data,
thanks to -Fc. That includes skipping some of the data at restore time.

Regards,
--
dim

