Re: pg_dump slower than pg_restore - Mailing list pgsql-general

From: David Wall
Subject: Re: pg_dump slower than pg_restore
Date:
Msg-id: 53B5EC6A.9050806@computer.org
In response to: Re: pg_dump slower than pg_restore  (Bosco Rama <postgres@boscorama.com>)
Responses: Re: pg_dump slower than pg_restore  (John R Pierce <pierce@hogranch.com>)
           Re: pg_dump slower than pg_restore  (Bosco Rama <postgres@boscorama.com>)
List: pgsql-general
On 7/3/2014 10:36 AM, Bosco Rama wrote:
> If those large objects are 'files' that are already compressed (e.g.
> most image files and PDFs), you are spending a lot of time trying to
> compress already-compressed data ... and failing.
>
> Try setting the compression factor to an intermediate value, or even
> zero (i.e. no dump compression).  For example, to get the 'low hanging
> fruit' compressed:
>      $ pg_dump -Z1 -Fc ...
>
> IIRC, the default value of '-Z' is 6.
>
> As usual, your choice will be a run-time vs. file-size trade-off, so
> try several values for '-Z' and see what works best for you.

That's interesting.  Since I gzip the resulting output, I'll give -Z0 a
try.  I didn't realize that any compression was on by default.
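
Roughly what I have in mind, with the database name and file names just
placeholders for illustration:

     $ pg_dump -Z0 -Fc mydb | gzip > mydb.dump.gz      # no internal compression; gzip the stream once
     $ gunzip -c mydb.dump.gz | pg_restore -d mydb     # restore by feeding the custom-format dump via stdin

With -Z0 the custom-format archive is written uncompressed, so gzip does
all the compression in a single pass instead of pg_dump compressing it
first and gzip going over it again.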

Thanks for the tip...

