Hi Team,
We have set up a PostgreSQL master-slave topology with streaming
replication.
One of the steps in setting up streaming replication is to run
pg_basebackup on the slave to take a base backup from the master.
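For reference, the base backup step looks roughly like the following
(host, user, and data directory are illustrative placeholders, not our
actual values):

    # take a full base backup from the master, streaming WAL alongside
    # the data files and writing standby/recovery settings on the slave
    pg_basebackup -h <master_host> -U <replication_user> \
        -D <slave_data_dir> -X stream -P -R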
For every subsequent update of this database, this step is repeated:
the existing data copy on the slave is deleted and pg_basebackup is
run again.
With a data size of over 500 GB, copying the data from master to
slave takes a long time.
We are looking for an optimization technique so that the whole data
set does not have to be copied on every update of the system.
Is there a way to do that? Can somebody shed some light on this?
Subhankar Chattopadhyay
Bangalore, India