Thread: Optimize streaming replication because of network latency
Hello,
We’ve added a streaming standby in a GCP container. There is significant network latency between our primary in the EU and this standby in South America.
I ran a large update on a table (4 million rows) and it generated a lot of replication lag: more than 5 hours.
The standby has ample hardware resources (CPU, SSD disks) and had no load during this update.
How can we handle this lag and reduce its impact?
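To quantify the lag while it is happening, the `pg_stat_replication` view on the primary is the standard tool. A sketch (the `*_lag` columns exist on PostgreSQL 10 and later):

```sql
-- Run on the primary: per-standby replication lag in bytes and in time.
SELECT client_addr,
       state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)   AS send_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
       write_lag,
       flush_lag,
       replay_lag
FROM pg_stat_replication;
```

A large `send_lag_bytes` points at the network or the sender; a large `replay_lag` with small `send_lag_bytes` points at the standby.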
Basic PG Config:
shared_buffers = 8GB
work_mem = 128MB
max_connections = 600
wal_keep_segments = 1000
wal_sender_timeout = 0
replication_timeout = (not set)
wal_receiver_status_interval = 10s
max_wal_senders = 20
checkpoint_timeout = 10min
max_wal_size = 2GB
min_wal_size = 1GB
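To confirm what the running server actually uses (values can be overridden in included files or per session), the settings above can be checked via `pg_settings`. A sketch, listing just the parameters from this message:

```sql
-- Show the effective value, unit, and source of each replication-related setting.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'max_connections',
               'wal_keep_segments', 'wal_sender_timeout',
               'wal_receiver_status_interval', 'max_wal_senders',
               'checkpoint_timeout', 'max_wal_size', 'min_wal_size');
```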
Thank you
Mai
On 12/10/20 11:41 AM, Mai Peng wrote:
Effectively, how fast is the pipe between the two servers? Where are the bottlenecks? Is the pipe shared by other users?
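One way to answer this is to measure the pipe directly. A sketch using standard tools; the hostname is a placeholder, and `iperf3` must be installed on both ends:

```shell
# On the standby: start an iperf3 server.
iperf3 -s

# On the primary: round-trip latency, then sustained single-stream
# throughput to the standby (replace standby.example.com with the real host).
ping -c 10 standby.example.com
iperf3 -c standby.example.com -t 30
```

Comparing the measured throughput with the WAL generation rate of the big update tells you whether the link itself is the bottleneck.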
--
Angular momentum makes the world go 'round.
Hello Ron,
We have 200 ms of latency between the primary and the standby.
The interconnection is a VPN, and the connection is shared by several types of traffic: backups, application traffic, replication, and other database servers.
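A quick back-of-envelope shows why 200 ms of latency alone can throttle streaming replication: a single TCP connection tops out near window_size / RTT, so with an un-tuned window the WAL stream is capped regardless of disk speed on either end. The 64 KiB window and the 10 GiB of WAL below are assumptions for illustration, not measurements from this thread:

```python
# Rough ceiling on single-stream TCP throughput over a high-latency link,
# and what that implies for shipping the WAL of a large update.
# Assumed numbers: 64 KiB TCP window, 200 ms RTT (from this thread), 10 GiB WAL.

tcp_window_bytes = 64 * 1024        # un-tuned receive window (assumption)
rtt_seconds = 0.2                   # 200 ms round trip, as reported

max_throughput = tcp_window_bytes / rtt_seconds   # bytes per second
wal_bytes = 10 * 1024**3                          # hypothetical WAL volume

transfer_hours = wal_bytes / max_throughput / 3600

print(f"max throughput: {max_throughput / 1024:.0f} KiB/s")   # 320 KiB/s
print(f"time to ship 10 GiB of WAL: {transfer_hours:.1f} h")  # ~9.1 h
```

If numbers in this ballpark match what you see, raising the OS TCP buffer limits on both hosts (so the window can grow) is likely to help far more than any PostgreSQL setting.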
Thanks for your help
On 10 Dec 2020, at 19:00, Ron <ronljohnsonjr@gmail.com> wrote: