Thread: [GENERAL] Setting up replication slave on remote high latency host
Hi,
Thoughts and opinions on this please -
I have a db (the data dir is 90 GB) that I am trying to set up with a replication slave. The slave is on a host where latency stays above 300 ms at all times (WAN link).
Other times I have done this setup, I have simply rsync'ed the data dir to the other host, set the config, run rsync again, and fired up the slave. This works well.
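For context, that procedure is roughly the following (hosts, paths, and the backup label are placeholders, and this is a sketch of the usual two-pass rsync approach rather than my exact commands):

    # pass 1: copy the bulk of the data dir while the master is live;
    # skip WAL, the standby will fetch that itself once it starts
    rsync -az --exclude pg_xlog --exclude postmaster.pid \
        /var/lib/postgresql/data/ standby:/var/lib/postgresql/data/

    # pass 2: mark a backup on the master, re-run rsync to pick up the
    # delta, then stop the backup; after that, set up recovery.conf on
    # the slave and start it
    psql -c "SELECT pg_start_backup('clone', true);"
    rsync -az --delete --exclude pg_xlog --exclude postmaster.pid \
        /var/lib/postgresql/data/ standby:/var/lib/postgresql/data/
    psql -c "SELECT pg_stop_backup();"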
However, my bandwidth to the host in question fluctuates between 800 KB/s and 3 MB/s. Performing this initial rsync, and then having to rsync again whenever the replication slave drops out due to network latency, is not something I think will work in this situation.
Right now I am trying to dump the database, gzip it, move it across, and import it into the new slave (which is configured as a master to perform the initial setup). Ideally I would do this dump, move, and import during a period of inactivity on the master, so the new server comes up and can immediately catch up on replication. However, I have been importing the current db as a test, and after 90 minutes it seems to have gotten only about two-thirds of the way through. I am not confident this will work, but it seems like the most efficient way to start.
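That is, roughly this pipeline (database name and paths are placeholders):

    # on the master: plain-text dump, compressed before it hits the WAN
    pg_dump mydb | gzip > mydb.sql.gz
    scp mydb.sql.gz standby:/tmp/

    # on the new box
    gunzip -c /tmp/mydb.sql.gz | psql mydb

    # an alternative I've considered: the custom format compresses as it
    # dumps and allows a parallel restore, which might help with the
    # slow import:
    #   pg_dump -Fc mydb > mydb.dump
    #   pg_restore -j4 -d mydb mydb.dump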
Have I missed anything here?
Now, assuming I get the slave up, how best can I keep it from dropping out because of latency, and make sure it can recover when it does? Would increasing the number of retained WAL segments (wal_keep_segments) be the best way?
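e.g., something like this in postgresql.conf on the master (the numbers and archive path are illustrative guesses, not tested values):

    # keep enough WAL on the master for the standby to reconnect and
    # catch up after a dropout (segments are 16 MB each, so 1024
    # segments is roughly 16 GB of headroom)
    wal_keep_segments = 1024

    # more robust: archive WAL somewhere the standby can reach, and set
    # restore_command on the standby so it can replay from the archive
    # once it has fallen too far behind the stream
    archive_mode = on
    archive_command = 'rsync -a %p standby:/var/lib/postgresql/wal_archive/%f'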
Thanks,
On 11/15/2017 6:02 PM, Rory Falloon wrote:
> Right now I am trying to dump the database, gzip, move across, and import into the new slave (which is configured as a master to perform the initial setup). Ideally I do this dump, move and import during a period of inactivity on the master so the new server will come up and immediately be able to catch up on replication due to lack of activity. However, I have been importing the current db as a test and after 90 minutes it seems to have only got 2/3 of the way. I am not confident this will work but it seems like the most efficient way to start.

you can't use pg_dump to create a slave, as it won't have the same timeline.

I would use pg_basebackup, but in general streaming replication over a high latency erratic link will never work real well.

--
john r pierce, recycling bits in santa cruz
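For reference, a pg_basebackup invocation along those lines might look like the following, run from the standby (host, user, and data directory are placeholders; options as of 9.x/10):

    # take a full base backup over a replication connection;
    # -X stream also streams the WAL generated during the backup,
    # -R writes recovery.conf so the standby follows the master on startup,
    # -P shows progress, which is handy on a slow link
    pg_basebackup -h master.example.com -U replicator \
        -D /var/lib/postgresql/data -X stream -R -P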
Thank you for that. Back to the drawing board!
On Wed, Nov 15, 2017 at 9:30 PM, John R Pierce <pierce@hogranch.com> wrote:
> On 11/15/2017 6:02 PM, Rory Falloon wrote:
>> Right now I am trying to dump the database, gzip, move across, and import into the new slave (which is configured as a master to perform the initial setup). Ideally I do this dump, move and import during a period of inactivity on the master so the new server will come up and immediately be able to catch up on replication due to lack of activity. However, I have been importing the current db as a test and after 90 minutes it seems to have only got 2/3 of the way. I am not confident this will work but it seems like the most efficient way to start.
>
> you can't use pg_dump to create a slave, as it won't have the same timeline.
>
> I would use pg_basebackup, but in general streaming replication over a high latency erratic link will never work real well.
>
> --
> john r pierce, recycling bits in santa cruz