Re: pg_dump slower than pg_restore - Mailing list pgsql-general

From David Wall
Subject Re: pg_dump slower than pg_restore
Date
Msg-id 53B5F5BA.2070600@computer.org
In response to Re: pg_dump slower than pg_restore  (Bosco Rama <postgres@boscorama.com>)
Responses Re: pg_dump slower than pg_restore  (Bosco Rama <postgres@boscorama.com>)
List pgsql-general
On 7/3/2014 5:13 PM, Bosco Rama wrote:
> If you use gzip you will be doing the same 'possibly unnecessary'
> compression step. Use a similar approach to the gzip command as you
> would for the pg_dump command. That is, use one of the -[0-9] options,
> like this: $ pg_dump -Z0 -Fc ... | gzip -[0-9] ...
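
If I follow that, the full round trip would look roughly like this (just a
sketch; the database name and file names here are placeholders):

$ pg_dump -Z0 -Fc mydb | gzip > mydb.dump.gz
$ gunzip -c mydb.dump.gz | pg_restore -d mydb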

Bosco, maybe you can recommend a different approach.  I run daily backups
that I keep only for disaster recovery.  I generally don't do partial
recoveries, so I doubt I'd ever modify the dump output.  I just re-read the
docs about formats, and it's not clear which I'd be best off with; "plain"
is the default, but the docs don't say whether it can be used with
pg_restore.
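
(From what I can tell, a plain-format dump is just a SQL script, so presumably
it would be restored with psql rather than pg_restore -- something like:

$ psql mydb < mydb.sql

but correct me if I'm wrong about that.)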

Maybe --format=c isn't the fastest option for me, and I'm less sure about
the compression.  I do want to be able to restore using pg_restore (unless
plain is the best route, in which case, how do I restore that type of
backup?), and I need to include large objects (--oids); but otherwise, I'm
mostly interested in the backup being as quick as possible.

Many of the large objects are gzip-compressed when stored.  Would I be
better off letting PG do its compression and dropping gzip, or turning off
all PG compression and using gzip?  Or perhaps use neither, since my large
objects, which take up the bulk of the database, are already compressed?
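
For concreteness, the variants I'm weighing look roughly like this (again a
sketch; database and file names are placeholders):

$ pg_dump -Fc mydb > mydb.dump                # PG's internal compression, no gzip
$ pg_dump -Z0 -Fc mydb | gzip > mydb.dump.gz  # no internal compression, gzip outside
$ pg_dump -Z0 -Fc mydb > mydb.dump            # no compression at all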


