Hi Sathish, as Michel Pelletier pointed out, a trigger-based approach (i.e. Slony-I or Bucardo) or the pglogical extension (which requires a server restart to load) is the way to go. Personally, I've worked with Slony-I; the initial setup is somewhat tricky, but it works. Depending on how heavily the table is written to, and on storage and network speed, we're talking about seconds of sync lag in the worst-case scenario.
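For reference, a minimal pglogical sketch could look something like the following (the node names, the table name public.big_table and the connection strings are made up; you also need pglogical in shared_preload_libraries and wal_level = 'logical' on the provider, which is the restart mentioned above):

    -- on the provider (dbserver1), after setting
    -- shared_preload_libraries = 'pglogical', wal_level = 'logical'
    -- and restarting:
    CREATE EXTENSION pglogical;
    SELECT pglogical.create_node(
        node_name := 'provider1',
        dsn := 'host=dbserver1 port=5432 dbname=mydb'
    );
    SELECT pglogical.replication_set_add_table(
        set_name := 'default',
        relation := 'public.big_table',
        synchronize_data := true
    );

    -- on the subscriber (dbserver2), same extension/restart dance;
    -- big_table must already exist there with the same definition:
    CREATE EXTENSION pglogical;
    SELECT pglogical.create_node(
        node_name := 'subscriber1',
        dsn := 'host=dbserver2 port=5432 dbname=mydb'
    );
    SELECT pglogical.create_subscription(
        subscription_name := 'sub1',
        provider_dsn := 'host=dbserver1 port=5432 dbname=mydb'
    );

Once the initial copy finishes and the subscription is caught up, you stop writes briefly and point the application at dbserver2.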
Hi All,
Can you tell me a way to replicate or sync a table, or otherwise achieve minimal downtime, when moving it from dbserver1 to dbserver2 on PostgreSQL 9.5?
Table Size: 160GB
4 vCPU, 16GB RAM
The table size is 160GB. We would like to move/copy this table from one DB server to another DB server.
Hi Adrian,
We are exporting live table data to a new database, so we need to stop our application until the export/import is completed. We would like to minimise this downtime.
It's more complicated if you want to keep your application running and writing to the DB while migrating. There are trigger-level replication tools, like Slony, that can be used to stream changes to the new database; you then switch over once you get both of them to parity, but there are some gotchas. You said the db is only 160GB. It depends a lot on what kind of schema we're talking about, but I imagine it wouldn't take long to just take the downtime and do a normal pg_upgrade.
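If it helps to see the mechanism those trigger-based tools rely on, here is a bare-bones sketch of change capture into a log table (Slony/Bucardo do this for you, far more robustly; the table and column names are made up and assume big_table has a bigint primary key called id):

    -- a queue of changes made to big_table while the bulk copy runs
    CREATE TABLE big_table_changes (
        change_id  bigserial   PRIMARY KEY,
        op         text        NOT NULL,
        row_pk     bigint      NOT NULL,
        changed_at timestamptz NOT NULL DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION log_big_table_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO big_table_changes (op, row_pk) VALUES (TG_OP, OLD.id);
            RETURN OLD;
        END IF;
        INSERT INTO big_table_changes (op, row_pk) VALUES (TG_OP, NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER big_table_capture
        AFTER INSERT OR UPDATE OR DELETE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE log_big_table_change();

A replay job then applies the queued changes to the new server until both sides are at parity, which is the point where you cut over.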
On 3/31/19 11:09 PM, Sathish Kumar wrote:
> Hi Team,
>
> We have a requirement to copy a table from one database server to
> another database server. We are looking for a solution to achieve this
> with lesser downtime on Prod. Can you help us with this?
So what is creating the downtime now?
In addition to other suggestions you might want to take a look at:
https://www.postgresql.org/docs/9.5/postgres-fdw.html
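A rough postgres_fdw sketch, run on the new server (server, user and table names here are illustrative, and the local copy of big_table has to exist already; note that a single 160GB INSERT ... SELECT is one long transaction, so you may want to chunk it):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER old_db FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'dbserver1', port '5432', dbname 'mydb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER old_db
        OPTIONS (user 'app_user', password 'secret');

    -- expose the remote table under a separate schema to avoid a name clash
    CREATE SCHEMA remote_src;
    IMPORT FOREIGN SCHEMA public LIMIT TO (big_table)
        FROM SERVER old_db INTO remote_src;

    -- pull the rows across
    INSERT INTO public.big_table SELECT * FROM remote_src.big_table;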
>
> Table Size: 160GB
> Postgresql Server Version: 9.5
>
>
--
Adrian Klaver
adrian.klaver@aklaver.com