Re: Table Export & Import - Mailing list pgsql-general

From Michel Pelletier
Subject Re: Table Export & Import
Msg-id CACxu=vJeHBxKKgWJRMGaCRqpZ6M3qCcR7G3GWnC65UvWLcbFsQ@mail.gmail.com
In response to Re: Table Export & Import  (Sathish Kumar <satcse88@gmail.com>)
Responses Re: Table Export & Import  (Sathish Kumar <satcse88@gmail.com>)
List pgsql-general
On Mon, Apr 1, 2019 at 7:47 AM Sathish Kumar <satcse88@gmail.com> wrote:
Hi Adrian,
We are exporting live table data to a new database, so we need to stop our application until the export/import is completed. We would like to minimise this downtime.

It's more complicated if you want to keep your application running and writing to the db while migrating.  There are trigger-level replication tools, like Slony, that can be used to stream changes to the new database; you switch over once both databases reach parity, but there are some gotchas.  You said the db is only 160GB.  It depends a lot on what kind of schema we're talking about, but I imagine it wouldn't take long to just take the downtime and do a normal pg_upgrade.
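
If you do take a downtime window, a single-table copy is usually just a dump piped straight into the new server, along these lines (host, database, and table names below are placeholders):

    # Dump only the one table and restore it on the new server in a single stream.
    # Add --no-owner / --no-privileges if the role names differ between servers.
    pg_dump -h old-db-host -d proddb -t big_table | psql -h new-db-host -d newdb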
 

On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 3/31/19 11:09 PM, Sathish Kumar wrote:
> Hi Team,
>
> We have a requirement to copy a table from one database server to
> another database server. We are looking for a solution to achieve this
> with lesser downtime on Prod. Can you help us with this?

So what is creating the downtime now?

In addition to other suggestions you might want to take a look at:

https://www.postgresql.org/docs/9.5/postgres-fdw.html
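
A minimal sketch of that postgres_fdw route, run on the new server (server, schema, table, and credential names here are placeholders; IMPORT FOREIGN SCHEMA needs 9.5 or later on both sides):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER old_server
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'old-db-host', dbname 'proddb');

    CREATE USER MAPPING FOR CURRENT_USER
      SERVER old_server
      OPTIONS (user 'appuser', password 'secret');

    -- Expose the remote table locally in its own schema to avoid a name clash.
    CREATE SCHEMA remote;
    IMPORT FOREIGN SCHEMA public LIMIT TO (big_table)
      FROM SERVER old_server INTO remote;

    -- Pull the rows across; indexes and constraints have to be recreated afterwards.
    CREATE TABLE public.big_table AS SELECT * FROM remote.big_table;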


>
> Table Size: 160GB
> Postgresql Server Version: 9.5
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com
