Streaming Replication 9.2

From: David Greco
Date: 04/11/2013 09:51 AM

I’ve set up streaming replication between two 9.2 servers, and have a few concerns/questions. I set it up with a very large wal_keep_segments (17000), and do NOT ship the logs to the standby.
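Roughly, the relevant settings on my primary look like this (everything
except wal_keep_segments is illustrative):

# postgresql.conf on the primary
wal_level = hot_standby       # 'archive' suffices for replication alone;
                              # 'hot_standby' also allows reads on the standby
max_wal_senders = 3           # illustrative; one connection per standby
wal_keep_segments = 17000     # 17000 x 16 MB segments, ~272 GB of WAL retained
# archive_mode = off          # WAL is not shipped to the standby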

When I fail over to the slave, MUST the process of bringing the former master back up as a slave involve copying a base backup from the new master to the former master? Even if I keep enough WAL segments to cover all changes since the former slave was promoted? I tried just bringing up the former master as a slave, but it complains “timeline 2 of the primary does not match recovery target timeline 1”.
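For reference, the recovery.conf I used on the former master was roughly
this (host and user are placeholders):

# recovery.conf on the former master
standby_mode = 'on'
primary_conninfo = 'host=NEW-MASTER port=5432 user=replicator'
# no restore_command, since no WAL archive exists to restore from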

If this is true, what is a good strategy for handling primary/slave in distant geographic locations where copying a base backup between the two is not terribly convenient?  

Re: Streaming Replication 9.2

From: Shaun Thomas
On 04/11/2013 09:51 AM, David Greco wrote:

> I’ve set up streaming replication between two 9.2 servers, and have a
> few concerns/questions. I set it up with a very large
> wal_keep_segments (17000), and do NOT ship the logs to the standby.

O_o

> When I fail over to the slave, MUST the process of bringing the former
> master back up as a slave involve copying a base backup from the new
> master to the former master?

There's actually been quite a long discussion about just this recently.
There doesn't seem to be a consensus, but currently the answer is yes.
In order for the old master to follow the new one, it has to be re-synced.

For geographical separation, rsync might not cut it depending on how
long replication has been going on. Hint-bit updates and MVCC churn
mean essentially every file will have changed, so rsync ends up copying
nearly the whole cluster anyway. I've played around with the following,
and it works splendidly for drastically cutting the bandwidth and time
required to re-sync across vast distances:

On current master:

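# serve a compressed tarball of the data directory on TCP port 9999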
tar -C /path/to/pgdata -c . | lzop --fast | nc -l 9999

On replication subscriber:

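# fetch the stream, decompress it, and unpack into a fresh data directory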
mkdir /path/to/pgdata
nc MASTER-NODE 9999 | lzop -d | tar -C /path/to/pgdata -x

If you don't care how long it takes, you can replace lzop with lbzip2
or another compressor that can run in parallel. This will take 4-8x
longer, but can use up to 30% less bandwidth, based on tests I've run.
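For instance, the master side becomes something like this (lbzip2's -n
sets the number of worker threads; the value is illustrative):

tar -C /path/to/pgdata -c . | lbzip2 -n 4 | nc -l 9999

On the subscriber, swap lzop -d for lbzip2 -d accordingly.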

Otherwise, I'd recommend just using pg_basebackup.
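A minimal invocation, run on the node being re-synced, would be
something like this (the user name is a placeholder; -X stream opens a
second walsender connection to stream WAL during the copy):

pg_basebackup -h MASTER-NODE -U replicator -D /path/to/pgdata -X stream -P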

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
sthomas@optionshouse.com
