Hi All,
I have two servers, each running a PostgreSQL 9.4.4 instance.
I need to determine the best way to keep a large (20 GB) table on server A
synchronised onto server B...
At the moment, I use pg_dump to periodically dump the table on server A,
then psql to reload it into server B. This works, but it means I pull 100%
of the table each time rather than just the changes, and it does not give
anything close to real-time accuracy on server B.
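For reference, the current workflow is essentially this one-liner (host,
database and table names are placeholders):

  pg_dump -h serverA -d mydb -t big_table --clean | psql -h serverB -d mydb

--clean makes the dump drop and recreate the table on server B, so every
run moves the full 20 GB.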
I have considered a row-level trigger on INSERT, UPDATE and DELETE, pushing
each change with dblink or postgres_fdw, but I am concerned that triggers
fire synchronously inside the writing transaction... so every dblink or FDW
round trip would delay commits on server A.
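A minimal sketch of that idea, assuming big_table has a single-column
primary key id and one payload column val (all names and the connection
string are placeholders):

  CREATE EXTENSION dblink;

  CREATE OR REPLACE FUNCTION push_change() RETURNS trigger AS $$
  DECLARE
    conn text := 'host=serverB dbname=mydb';  -- placeholder
  BEGIN
    IF TG_OP = 'DELETE' THEN
      PERFORM dblink_exec(conn,
        format('DELETE FROM big_table WHERE id = %L', OLD.id));
      RETURN OLD;
    ELSE
      -- delete-then-insert acts as an upsert (9.4 has no ON CONFLICT)
      PERFORM dblink_exec(conn,
        format('DELETE FROM big_table WHERE id = %L; '
               'INSERT INTO big_table VALUES (%L, %L)',
               NEW.id, NEW.id, NEW.val));
      RETURN NEW;
    END IF;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER big_table_sync
    AFTER INSERT OR UPDATE OR DELETE ON big_table
    FOR EACH ROW EXECUTE PROCEDURE push_change();

Every write on A then blocks on a round trip to B, which is exactly the
latency worry above.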
I have also considered using table OIDs to track changes, then periodically
syncing just the difference.
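One caveat I see here: OIDs on user tables are off by default and do not
change when a row is updated, so they cannot mark modified rows. A variant
I could try instead is a trigger-maintained timestamp column, something
like (names are placeholders):

  ALTER TABLE big_table
    ADD COLUMN last_modified timestamptz NOT NULL DEFAULT now();

  CREATE OR REPLACE FUNCTION touch_row() RETURNS trigger AS $$
  BEGIN
    NEW.last_modified := now();
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER big_table_touch
    BEFORE UPDATE ON big_table
    FOR EACH ROW EXECUTE PROCEDURE touch_row();

  -- the periodic sync then pulls only rows changed since the last run:
  SELECT * FROM big_table
  WHERE last_modified > '2015-08-01 00:00+00';  -- time of previous sync

This would catch inserts and updates but not deletes, which would still
need a trigger-fed log table or similar.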
I have considered using postgres_fdw and then REFRESH MATERIALIZED VIEW
CONCURRENTLY on server B.
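Roughly, on server B (connection details and the column list are
placeholders):

  CREATE EXTENSION postgres_fdw;

  CREATE SERVER server_a FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'serverA', dbname 'mydb');
  CREATE USER MAPPING FOR CURRENT_USER SERVER server_a
    OPTIONS (user 'sync_user', password 'secret');

  CREATE FOREIGN TABLE big_table_remote (id int, val text)
    SERVER server_a OPTIONS (table_name 'big_table');

  CREATE MATERIALIZED VIEW big_table_copy AS
    SELECT * FROM big_table_remote;
  -- CONCURRENTLY requires a unique index on the matview
  CREATE UNIQUE INDEX ON big_table_copy (id);

  -- then on each sync:
  REFRESH MATERIALIZED VIEW CONCURRENTLY big_table_copy;

My understanding is that the refresh still re-reads the whole remote table
over the FDW; CONCURRENTLY only avoids locking out readers, it does not
make the refresh incremental.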
I have considered using logical decoding to read the WAL, then extract just
the changes.
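On 9.4 that could be tested with the built-in test_decoding plugin,
assuming wal_level = logical and a free replication slot on server A (slot
name is a placeholder):

  -- on server A (postgresql.conf: wal_level = logical,
  --              max_replication_slots >= 1)
  SELECT * FROM pg_create_logical_replication_slot('sync_slot',
                                                   'test_decoding');

  -- later, fetch and consume everything since the previous call:
  SELECT * FROM pg_logical_slot_get_changes('sync_slot', NULL, NULL);

The same stream can be pulled from the command line with pg_recvlogical,
but some external glue would still be needed to apply the changes on
server B.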
Can anyone explain the best way to synchronise JUST the changes to a table
between servers, please?
thanks
Jim