On 11/20/2016 9:29 PM, Subhankar Chattopadhyay wrote:
> We have set up a PostgreSQL master-slave topology with streaming
> replication.
> One of the steps for setting up streaming replication is to run
> pg_basebackup on the slave against the master.
>
> For every subsequent update of this system, this step is repeated:
> the existing data copy on the slave is deleted and pg_basebackup is
> run again.
>
> With a data size of over 500GB, it takes a long time to copy the
> data from the master to the slave.
> We are looking for an optimization so that the whole data set does
> not have to be copied on every update of the system.
>
> Is there a way to do that? Can somebody throw some light on this?
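
For reference, the initial copy described above is usually taken once,
with something along these lines (a minimal sketch; the host name,
replication user, and target data directory are placeholders):

    # run on the slave, with the slave's PostgreSQL instance stopped
    pg_basebackup -h master.example.com -U replication_user \
        -D /var/lib/postgresql/9.6/main \
        -X stream -P -R
    # -X stream streams WAL during the copy; -R writes a recovery.conf
    # pointing the standby at the master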
If you have streaming replication, why do you delete it and start over?
Streaming replication should replicate all updates on the master to the
slave(s) in near real time.
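
Once the standby is streaming, you can confirm it is keeping up rather
than re-running pg_basebackup. A rough sketch, assuming a 9.x release
(current at the time of this thread; connection details are placeholders):

    # on the master: one row per connected standby, with sent/replay positions
    psql -c "SELECT client_addr, state, sent_location, replay_location
             FROM pg_stat_replication;"

    # on the slave: should return 't', and the receive location should advance
    psql -c "SELECT pg_is_in_recovery(), pg_last_xlog_receive_location();"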
--
john r pierce, recycling bits in santa cruz