Re: PG replication across DataCenters - Mailing list pgsql-general

From Bill Moran
Subject Re: PG replication across DataCenters
Msg-id 20131229120846.ecedab9bc2a61b937871c170@potentialtech.com
In response to Re: PG replication across DataCenters  (Sameer Kumar <sameer.kumar@ashnik.com>)
List pgsql-general
On Mon, 30 Dec 2013 00:15:37 +0800 Sameer Kumar <sameer.kumar@ashnik.com> wrote:

> >> > * Quick and easy movement of the master to any of the databases in
> >> >
> >> >   the cluster without destroying replication.
> >> >
> >> > Again, which version? Re-mastering is made simple in v9.3.
>
> >> I'm not seeing that in the documentation.  In fact, what I'm finding
> >> seems to suggest the opposite: that each node's master is configured
> >> in a config file, so in the case of a complicated replication setup,
> >> I would have to run around editing config files on multiple servers
> >> to move the master ... unless I'm missing something in the documentation.
>
> Well, the pain can be minimized if you can write some simple shell scripts
> for this. Or if you can have a floating/virtual IP.
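
For concreteness, the kind of per-standby script being suggested would look
roughly like this -- a sketch only, with hypothetical hostnames and paths,
assuming a 9.x standby whose recovery.conf carries primary_conninfo:

    # On each standby: point primary_conninfo at the new master, then
    # restart so the change takes effect (9.x reads recovery.conf only
    # at startup).  Hostnames and paths are hypothetical; sed -i is GNU.
    PGDATA=/var/lib/pgsql/9.3/data
    sed -i "s/host=oldmaster/host=newmaster/" "$PGDATA/recovery.conf"
    pg_ctl restart -D "$PGDATA" -m fast

Multiply that by every standby in the cluster and you get the
running-around-editing-config-files problem quoted above.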

This is probably the only point that we're not seeing eye to eye on.

Take a real scenario I have to maintain.  There is a single master and 11
replicas spread across 2 datacenters.  Some of these replicas are read-only
for the application, one is for analytics, another supports development,
another is a dedicated backup system.  The rest are purely for DR.

Now, in a facility failure scenario, all is well: we just promote
the DR master in the secondary datacenter and go back to work -- this should
be equally easy with either Slony or streaming.
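
The promotion itself is a one-liner on the DR standby (assuming an
ordinary streaming standby; the data directory path is hypothetical):

    # Promote the DR standby so it leaves recovery and accepts writes
    # (pg_ctl promote is available in 9.1 and later).
    pg_ctl promote -D /var/lib/pgsql/9.3/data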

What I don't see streaming working for is DR drills.  I need to, in a
controlled manner, move the entire application to the secondary datacenter,
while keeping all the nodes in sync, make sure everything operates properly
from there (which means allowing database updates), then move it all back
to the primary datacenter, without losing sync on any slaves (this is a 2TB
database, which I'm sure isn't the largest anyone has dealt with, but it
means that reseeding slaves is a multi-hour endeavour).  With Slony, these
drills are easy: a single slonik command relocates the master to the DR
datacenter while keeping everything in sync, and when testing is complete,
another slonik command puts everything back the way it was, without any
data loss and with minimal chance for human error.
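
For the record, that controlled move amounts to a slonik script along
these lines -- the cluster name, node IDs, and conninfo strings are
hypothetical, and moving back is the same script with the origins swapped:

    # Move the origin of replication set 1 from node 1 (primary DC)
    # to node 2 (DR DC), keeping every subscriber in sync.
    cluster name = mycluster;
    node 1 admin conninfo = 'host=primary-dc dbname=mydb user=slony';
    node 2 admin conninfo = 'host=dr-dc dbname=mydb user=slony';

    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);
    wait for event (origin = 1, confirmed = 2, wait on = 1);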

If you feel that the current implementation of streaming replication is
able to do that task, then I'll have to move up my timetable to re-evaluate
it.  It _has_ been a few versions since I've taken a good look at it.

--
Bill Moran <wmoran@potentialtech.com>

