Re: Is it possible to "pipe" pg_dump output into new db? - Mailing list pgsql-general

From Frank
Subject Re: Is it possible to "pipe" pg_dump output into new db?
Date
Msg-id CAMLSce5SWdcizKOHYA7zByWgn8AMpXWc9=CdunEBUxzoPBQROg@mail.gmail.com
In response to Re: Is it possible to "pipe" pg_dump output into new db?  (Raymond O'Donnell <rod@iol.ie>)
List pgsql-general

Ray, Alan,

Thanks for your replies.

We have tested the dump/restore procedure with a few smaller databases, and it worked fine. We had a few hiccups with the large database, as it required a few modules and special tablespaces to be created before it would actually start copying data. But that was not a real problem; a test upgrade is currently running.
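
For anyone following along, the preparation amounted to creating the required tablespaces and extensions on the new cluster before the restore; a minimal sketch (the names and paths here are made up for illustration, not our real ones):

  # on the new 9.3 cluster, before restoring
  psql -d postgres -c "CREATE TABLESPACE big_ts LOCATION '/mnt/bigdisk/pg_tblspc'"
  psql -d targetdb -c "CREATE EXTENSION IF NOT EXISTS hstore"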

We will use this upgrade cycle to systematically test and evaluate all upgrade options.

We have an "if all else fails" full plain dump that we just restored as a test, in this case to gauge performance; we also restore it regularly to make sure it actually works. The file is compressed with rar: packed size is about 100 GB, unpacked about 1 TB. Uncompressing alone on a decent machine (256 GB RAM, 12 cores) took about a day. Importing via psql took about 12 hours (fsync off, WAL segment settings adjusted, etc.).
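
The bulk-load tuning was along these lines; the values below are illustrative rather than our exact ones, and everything is reverted once the import finishes (the relevant WAL setting in 9.2/9.3 is called checkpoint_segments):

  # postgresql.conf, temporarily, for the duration of the import only
  fsync = off                  # unsafe for normal operation
  checkpoint_segments = 64     # fewer forced checkpoints during the load
  maintenance_work_mem = 2GB   # faster index builds
  autovacuum = off             # re-enable and ANALYZE afterwards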

Currently we are running the direct pg_dump/pg_restore upgrade from a slave that we just took off WAL replication. Based on its progress, I am expecting something in the 12-15 hour range.
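
This is essentially the pipe Ray suggested, just with the custom format; hostnames and the database name are placeholders:

  pg_dump -Fc -h old-slave bigdb | pg_restore -h new-server -d bigdb

One caveat: pg_restore cannot use parallel jobs (-j) when reading from a pipe, as that requires a seekable archive file.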

Finally, we will try the pg_upgrade option on a test slave. I expect that to be quite fast, as it more or less just needs to copy the data once and correct/adjust the system tables, if I am not mistaken. So that should take about as long as copying 1 TB of data, plus the table adjustments.
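
For completeness, the invocation looks roughly like this (the binary and data directory paths are placeholders for whatever the test slave uses):

  pg_upgrade \
    -b /usr/lib/postgresql/9.2/bin -B /usr/lib/postgresql/9.3/bin \
    -d /var/lib/postgresql/9.2/main -D /var/lib/postgresql/9.3/main

With -k (hard links) it would skip even the 1 TB copy, at the price of not being able to fall back to the old cluster once the new one has been started.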

Fortunately we can use the weekend to freeze the database, so the reduced downtime that might be achieved by the Slony approach is not a hard requirement, and we can avoid the complexities of that approach.


Thanks,

Frank

On Tue, Mar 25, 2014 at 4:46 PM, Raymond O'Donnell <rod@iol.ie> wrote:
On 25/03/2014 13:56, Frank Foerster wrote:
>
> Hi,
>
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
>
> Fortunately we have a capable spare-server so we can restore into a
> clean, freshly setup machine.
>
> I just wondered whether the intermediate step of writing the dump-file
> and re-reading it to have it written to the database is really
> necessary. Is there any way to "pipe" the dump-file directly into the
> new database-process, or would such functionality make sense?

Surely:

  pg_dump [...etc...] | psql [...etc...]

Though I'm sure it will still take a long time for a database of that size.

Another option to explore would be to use Slony, which can replicate
databases between different Postgres versions - one of its design
use-cases is to perform upgrades like this with a minimum of down-time.
You can replicate the database over to the new server, and then
switching need take only seconds once the new one is ready.

Ray.


--
Raymond O'Donnell :: Galway :: Ireland
rod@iol.ie
