Dimitri Fontaine wrote:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
> > Well, what we *really* need is a convincing argument that it's worth
> > taking some risk for. I find that not obvious. You can pipe the output
> > of pg_dump into your-choice-of-compressor, for example, and that gets
> > you the ability to spread the work across multiple CPUs in addition to
> > eliminating legal risk to the PG project.
>
> Well, I like -Fc and playing with the catalog to restore only the
> "interesting" data in staging environments. I even automated all the
> catalog mangling in pg_staging so that I just have to set up which
> schema I want, with only the DDL or with the DATA too.
>
> The fun part is when you want to exclude functions that are used in
> triggers based on the schema where the function lives rather than the
> trigger's schema, BTW, but that's another story.
>
> So yes, having both -Fc and a compression facility other than plain
> gzip would be good news. And benefiting from better compression in
> TOAST would be good too, I guess (small size hit, lots faster, would
> fit).
>
> In summary: my convincing argument is using the dumps to efficiently
> prepare development and testing environments from production data,
> thanks to -Fc. That includes skipping some of the data at restore
> time.

I assume people realize that if they are using pg_dump -Fc and then
compressing the output later, they should turn off compression in
pg_dump. Or is that something we should document/suggest?
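Something like the sketch below, say, where pigz stands in for
whatever parallel compressor you prefer and the database names are
just placeholders; -Z 0 turns off pg_dump's internal compression so
the external tool does all the work:

    # Custom-format dump with internal compression disabled, compressed
    # externally so the work is spread across multiple CPUs:
    pg_dump -Fc -Z 0 mydb | pigz > mydb.dump.gz

    # Restore by decompressing back into pg_restore on stdin:
    pigz -dc mydb.dump.gz | pg_restore -d staging_db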
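And for the selective-restore workflow Dimitri describes, the manual
equivalent of his catalog mangling with an -Fc archive is pg_restore's
list files; a minimal sketch (file and database names are again
placeholders):

    # Write out the archive's table of contents:
    pg_restore -l mydb.dump > mydb.list

    # Edit mydb.list, commenting out the entries you don't want (e.g.
    # the TABLE DATA lines for uninteresting schemas), then restore
    # only what remains:
    pg_restore -L mydb.list -d staging_db mydb.dump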
--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com