Re: Is it possible to "pip" pg_dump output into new db ? - Mailing list pgsql-general

From Alan Hodgson
Subject Re: Is it possible to "pip" pg_dump output into new db ?
Date
Msg-id 1712698.gmrIz94h6M@skynet.simkin.ca
In response to Is it possible to "pip" pg_dump output into new db ?  (Frank Foerster <fr667766@gmail.com>)
List pgsql-general
On Tuesday, March 25, 2014 02:56:48 PM Frank Foerster wrote:
> Hi,
>
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
>
> Fortunately we have a capable spare-server so we can restore into a clean,
> freshly setup machine.
>
> I just wondered whether the intermediate step of writing the dump-file and
> re-reading it to have it written to the database is really necessary. Is
> there any way to "pipe" the dump-file directly into the new
> database-process, or would such functionality make sense?
>
> I can only speak for us, but each time we do a dump/restore we need to
> extract/copy/move very large files and piping directly into something like
> psql/pg_restore on another machine etc. would greatly reduce
> upgrade-time/pain.
>
> Thanks and best regards,
>
> Frank

Sure. For maximum speed, something like:

pg_dump [options] source_db | pigz - | \
  ssh -e none user@target "gunzip - | psql [options] target_db"

Depending on your hardware, though, doing a custom-format backup to a target
file and then using it for a parallel restore would probably end up faster
overall, and you get to keep the backup if needed. In my experience, the
restore is a lot slower than the backup.
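For the custom-format route, a minimal sketch (the file path, job count, and database names are placeholders, not from this thread):

```shell
# Dump in custom format (-Fc); the output is compressed and can be
# read selectively by pg_restore.
pg_dump -Fc -f /backups/source_db.dump source_db

# Restore into the freshly initialized 9.3 database using 8 parallel
# jobs (-j); tune the job count to the target machine's CPUs and disks.
pg_restore -j 8 -d target_db /backups/source_db.dump
```

The parallel jobs mainly help during data loading and index creation, which is where most of the restore time goes.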

Slony is also great for avoiding most of the downtime, at the expense of a lot
of setup and testing time.

