Thread: Failback to old master

Failback to old master

From: "Maeldron T."
Date: 2014-10-29
Hello,

I swear I have read a couple of old threads. Yet I am not sure if it is safe to fail back to the old master in case of async replication without a base backup.

Considering:
I have the latest 9.3 server
A: master
B: slave
B is actively connected to A

I shut down A manually with -m fast (it's the default FreeBSD init script setting)
I remove the recovery.conf from B
I restart B
I create a recovery.conf on A
I start A
I see nothing wrong in the logs
I go for a lunch
I shut down B
I remove the recovery.conf on A
I restart A
I restore the recovery.conf on B
I start B
I see nothing wrong in the logs and I see that replication is working
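
For concreteness, roughly these commands (the data directories /data/A and /data/B and the saved recovery.conf copy are made up for the example):

# failover: stop the master cleanly, then make B the master
pg_ctl -D /data/A stop -m fast
rm /data/B/recovery.conf
pg_ctl -D /data/B restart

# failback preparation: attach A as a slave of B
cp /root/recovery.conf.to-B /data/A/recovery.conf
pg_ctl -D /data/A start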

Can I say that my data is safe in this case?

If the answer is yes, is it safe to do this if there was a power outage on A instead of a manual shutdown, considering that the log says nothing is wrong? (Of course, if it complains, I'd do a base backup from B.)

Thank you,

M.

Re: Failback to old master

From
Robert Haas
Date:
On Wed, Oct 29, 2014 at 6:21 AM, Maeldron T. <maeldron@gmail.com> wrote:
> I swear I have read a couple of old threads. Yet I am not sure if it is safe to
> fail back to the old master in case of async replication without a base backup.
>
> Considering:
> I have the latest 9.3 server
> A: master
> B: slave
> B is actively connected to A
>
> I shut down A manually with -m fast (it's the default FreeBSD init script
> setting)
> I remove the recovery.conf from B
> I restart B
> I create a recovery.conf on A
> I start A
> I see nothing wrong in the logs
> I go for a lunch
> I shut down B
> I remove the recovery.conf on A
> I restart A
> I restore the recovery.conf on B
> I start B
> I see nothing wrong in the logs and I see that replication is working
>
> Can I say that my data is safe in this case?
>
> If the answer is yes, is it safe to do this if there was a power outage on A
> instead of a manual shutdown, considering that the log says nothing is wrong?
> (Of course, if it complains, I'd do a base backup from B.)

The threshold question here is whether the original master might have
written (and thus, perhaps, applied) write-ahead log records that were
not replayed on the slave.  If A crashed, that is definitely possible,
so this is definitely not safe.  If A was shut down cleanly, then
streaming replication *should* take everything up through the shutdown
checkpoint and replicate those to the standby, which *should* replay
them.  If all goes according to plan, I think this will work.

I'm not sure we really have enough safeties to make this robust,
though: for example, at the point when the shutdown checkpoint is
written, I believe that the master is no longer accepting new
connections - so if the connection to the slave is broken before the
shutdown checkpoint record is replicated, then it's not safe any more,
but how will we detect that?  And, if you remove recovery.conf on the
slave, it will abort replay and enter normal running as soon as it
reaches what it thinks is end-of-WAL, with no cross-check to make sure
that's really the same WAL position that the master was actually at.  So
it strikes me that it might be quite difficult to really have
confidence that nothing will go wrong.

I'm definitely not the expert in this area on this mailing list, so
I'm curious what others think.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Failback to old master

From: "Maeldron T."
Date: 2014-10-29
Thank you, Robert.

I thought that removing the recovery.conf file makes the slave a master only after the slave is restarted (unlike creating a trigger_file). Isn't this true?

I also thought that if there was a crash on the original master, and it applied WAL entries on itself that are not present on the slave, then it would throw an error when I try to connect it to the new master (the old slave).

It would be nice to know, as creating a base backup takes much time.

As for the other case, when there was no crash, safely swapping the master and the slave twice without creating base backups makes upgrading the OS much easier (with only a couple of seconds of downtime).

I am afraid to try it in production until someone confirms that it's safe. It seems to work, though (but I don't like to bet).

M.


Re: Failback to old master

From: Robert Haas
Date: 2014-10-29
On Wed, Oct 29, 2014 at 12:43 PM, Maeldron T. <maeldron@gmail.com> wrote:
> Thank you, Robert.
>
> I thought that removing the recovery.conf file makes the slave a master only
> after the slave is restarted (unlike creating a trigger_file). Isn't
> this true?

Yes, but after the restart, the slave will also rewind to the most
recent restart-point to begin replay, and some of the sanity checks
that recovery.conf enforces will be lost during that replay.  A safe
way to do this might be to shut down the master, make a note of the
ending WAL position on the master, and then promote the slave (without
shutting it down) once it's reached that point in replay.
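
Something like this, roughly (the ports and data directory paths are
made up):

# on the master: stop cleanly, then note the final WAL position
# (after a clean shutdown the last record is the shutdown checkpoint)
pg_ctl -D /data/master stop -m fast
pg_controldata /data/master | grep 'Latest checkpoint location'
# on the standby: wait until replay reaches (or just passes) that point
psql -p 5434 -c "SELECT pg_last_xlog_replay_location();"
# only then promote, without restarting
pg_ctl -D /data/standby promote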

> I also thought that if there was a crash on the original master, and it
> applied WAL entries on itself that are not present on the slave, then it
> would throw an error when I try to connect it to the new master (the old
> slave).

I don't think you're going to be that lucky.

> It would be nice to know, as creating a base backup takes much time.

rsync can speed things up by copying only changed data, but yes, it's a problem.
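
Something along these lines (just a sketch; the host and paths are
made up, and it is only safe with both clusters shut down, or with
the source cluster in backup mode via pg_start_backup()/pg_stop_backup()):

rsync -av --delete newmaster:/var/lib/postgresql/data/ /var/lib/postgresql/data/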

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Failback to old master

From: "Maeldron T."
Date: 2014-11-11
Hi,

2014-10-29 17:46 GMT+01:00 Robert Haas <robertmhaas@gmail.com>:

> Yes, but after the restart, the slave will also rewind to the most
> recent restart-point to begin replay, and some of the sanity checks
> that recovery.conf enforces will be lost during that replay.  A safe
> way to do this might be to shut down the master, make a note of the
> ending WAL position on the master, and then promote the slave (without
> shutting it down) once it's reached that point in replay.

As far as I remember (I can’t test it right now but I am 99% sure) promoting the slave makes it impossible to connect the old master to the new one without making a base_backup. The reason is the timeline change. It complains.

The only way to do this is:
1. Stop the master
2. Restart the slave without recovery.conf
3. Restart the old master with a recovery.conf.

I have done this a couple of times back and forth and it "worked". I mean it didn't complain.
 

>> I also thought that if there was a crash on the original master, and it
>> applied WAL entries on itself that are not present on the slave, then it
>> would throw an error when I try to connect it to the new master (the old
>> slave).
>
> I don't think you're going to be that lucky.

>> It would be nice to know, as creating a base backup takes much time.
>
> rsync can speed things up by copying only changed data, but yes, it's a problem.


Actually, I am more afraid of rsyncing database data files between the nodes than of trusting the PostgreSQL error log. There is no technical reason for that; it's more psychological.

Is it possible that the old master has unreplicated changes and won't notice that when connecting to the new master (the old slave)? I thought that WAL records might have unique identifiers, but I don't know the details.

 

Re: Failback to old master

From: Ants Aasma
Date: 2014-11-12
On Tue, Nov 11, 2014 at 11:52 PM, Maeldron T. <maeldron@gmail.com> wrote:
> As far as I remember (I can’t test it right now but I am 99% sure) promoting the slave makes it impossible to connect
> the old master to the new one without making a base_backup. The reason is the timeline change. It complains.

A safely shut down master (-m fast is safe) can be safely restarted as
a slave to the newly promoted master. Fast shutdown shuts down all
normal connections, does a shutdown checkpoint and then waits for this
checkpoint to be replicated to all active streaming clients. Promoting
a slave to master creates a timeline switch, which prior to version 9.3
was only possible to replicate using the archive mechanism. As of
version 9.3 you don't need to configure archiving to follow timeline
switches, just add a recovery.conf to the old master to start it up as
a slave and it will fetch everything it needs from the new master.
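
For example, a minimal recovery.conf on the old master might look like
this (the connection details are placeholders):

standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
recovery_target_timeline = 'latest'

The recovery_target_timeline = 'latest' line is what makes it follow
the new master's timeline switch.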

In case of an unsafe shutdown (crash) it is possible that you have WAL
lying around that was not streamed out to the slave. In this case the
old master will request recovery from a point after the timeline
switch and the new master will reply with an error. So it is safe to
try re-adding a crashed master as a slave, but this might fail.
Success is more likely when the whole operating system went down, as
then it's somewhat likely that any WAL got streamed out before it made
it to disk.

In general my suggestion is to avoid slave promotion by removal of
recovery.conf; it's too easy to get confused and end up with
hard-to-diagnose data corruption.

In your example, if B happens to disconnect at WAL position x1 and
remains disconnected while the shutdown on A occurs at WAL position
x2, it will be missing the WAL interval (x1..x2) from A. Now B is
restarted as master from position x1 and generates some new WAL past
x2; then A is restarted as slave and starts streaming at x2, since to
the best of its knowledge that is where things left off. At this
point the slave A is corrupted: it has the x1..x2 changes from A that
are not on the master, and it is also missing some changes that are
on the master. Wrong data and/or crashes ensue.

Always use the promotion mechanism because then you are likely to get
errors when something is screwy. Unfortunately it's still possible to
end up in a corrupted state with no errors, as timeline identifiers
are sequential integers, not GUIDs, but at least it's significantly
harder.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de



Re: Failback to old master

From: Heikki Linnakangas
Date: 2014-11-13
On 11/12/2014 03:28 PM, Ants Aasma wrote:
> On Tue, Nov 11, 2014 at 11:52 PM, Maeldron T. <maeldron@gmail.com> wrote:
>> As far as I remember (I can’t test it right now but I am 99% sure) promoting the slave makes it impossible to
>> connect the old master to the new one without making a base_backup. The reason is the timeline change. It complains.
>
> A safely shut down master (-m fast is safe) can be safely restarted as
> a slave to the newly promoted master. Fast shutdown shuts down all
> normal connections, does a shutdown checkpoint and then waits for this
> checkpoint to be replicated to all active streaming clients.

Right. You have to be careful to make sure the standby really did fully 
catch up with the master, though. If it happens that the replication 
connection is momentarily down when you shut down the master, for 
example, then the master won't wait for the standby. You can use
pg_controldata to verify that, before promoting the standby.
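
For example (the data directory and port are made up):

# on the now-stopped master
pg_controldata /data/master | grep 'Latest checkpoint location'
# on the standby, compare what has been received and replayed
psql -p 5434 -c "SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();"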

> In case of an unsafe shutdown (crash) it is possible that you have WAL
> lying around that was not streamed out to the slave.

Right.

> In this case the
> old master will request recovery from a point after the timeline
> switch and the new master will reply with an error.  So it is safe to
> try re-adding a crashed master as a slave, but this might fail.

Are you sure it's guaranteed to fail, when the master had some WAL that 
was not streamed before the crash? I'm not 100% sure about that. I 
thought the old master might continue streaming and replaying, if there 
just happens to be a record starting at the same point in WAL on both
timelines. I think you'll get an error at the next checkpoint record, 
because its timeline ID isn't what the old master expects, but it 
would've started up already.

> Success is more likely when the whole operating system went down, as
> then it's somewhat likely that any WAL got streamed out before it made
> it to disk.
>
> In general my suggestion is to avoid slave promotion by removal of
> recovery.conf; it's too easy to get confused and end up with
> hard-to-diagnose data corruption.

Yeah, you can't emphasize that too much. Never remove recovery.conf. 
That removes all the safeguards, and it's very easy to get a database 
that looks OK at first glance, but is in fact corrupt.

- Heikki




Re: Failback to old master

From: Ants Aasma
On Thu, Nov 13, 2014 at 10:05 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
>> In this case the
>> old master will request recovery from a point after the timeline
>> switch and the new master will reply with an error.  So it is safe to
>> try re-adding a crashed master as a slave, but this might fail.
>
>
> Are you sure it's guaranteed to fail, when the master had some WAL that was
> not streamed before the crash? I'm not 100% sure about that. I thought the
> old master might continue streaming and replaying, if there just happens to
> be a record start at the same point in WAL on both timelines. I think you'll
> get an error at the next checkpoint record, because its timeline ID isn't
> what the old master expects, but it would've started up already.

It seems to me like it's guaranteed. Given slave promotion at x1 and
end of xlog on the old master at x2, x1 < x2, the master will request
streaming at tli1.x2, the WAL sender does tliSwitchPoint(tli1) to look up
x1, finds that x1 < x2, and gives the error "requested starting point
%X/%X on timeline %u is not in this server's history". The alignment
of x2 on tli2 does not play a role here.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de



Re: Failback to old master

From: "Maeldron T."
Date: 2014-11-15
On 12/11/14 14:28, Ants Aasma wrote:
> On Tue, Nov 11, 2014 at 11:52 PM, Maeldron T. <maeldron@gmail.com> wrote:
>> As far as I remember (I can’t test it right now but I am 99% sure) promoting the slave makes it impossible to
>> connect the old master to the new one without making a base_backup. The reason is the timeline change. It complains.
> A safely shut down master (-m fast is safe) can be safely restarted as
> a slave to the newly promoted master. Fast shutdown shuts down all
> normal connections, does a shutdown checkpoint and then waits for this
> checkpoint to be replicated to all active streaming clients. Promoting
> a slave to master creates a timeline switch, which prior to version 9.3
> was only possible to replicate using the archive mechanism. As of
> version 9.3 you don't need to configure archiving to follow timeline
> switches, just add a recovery.conf to the old master to start it up as
> a slave and it will fetch everything it needs from the new master.
>
I took your advice and I understood that removing the recovery.conf 
followed by a restart is wrong. I will not do that on my production servers.

However, I can't make it work with promotion. What did I do wrong? It was
9.4beta3.

mkdir 1
mkdir 2
initdb -D 1/
<edit config: change port, wal_level to hot_standby, hot_standby to on, 
max_wal_senders=7, wal_keep_segments=100, uncomment replication in hba.conf>
pg_ctl -D 1/ start
createdb -p 5433
psql -p 5433
pg_basebackup -p 5433 -R -D 2/
mcedit 2/postgresql.conf <change port>
chmod -R 700 1
chmod -R 700 2
pg_ctl -D 2/ start
psql -p 5433
psql -p 5434
<everything works>
pg_ctl -D 1/ stop
pg_ctl -D 2/ promote
psql -p 5434
cp 2/recovery.done 1/recovery.conf
mcedit 1/recovery.conf <change port>
pg_ctl -D 1/ start

LOG:  replication terminated by primary server
DETAIL:  End of WAL reached on timeline 1 at 0/3000AE0.
LOG:  restarted WAL streaming at 0/3000000 on timeline 1
LOG:  replication terminated by primary server
DETAIL:  End of WAL reached on timeline 1 at 0/3000AE0.

This is what I experienced in the past when I tried with promote. The 
old master disconnects from the new. What am I missing?





Re: Failback to old master

From: didier
Date: 2014-11-16
Hi,


On Sat, Nov 15, 2014 at 5:31 PM, Maeldron T. <maeldron@gmail.com> wrote:
>> A safely shut down master (-m fast is safe) can be safely restarted as
>> a slave to the newly promoted master. Fast shutdown shuts down all
>> normal connections, does a shutdown checkpoint and then waits for this
>> checkpoint to be replicated to all active streaming clients. Promoting
>> a slave to master creates a timeline switch, which prior to version 9.3
>> was only possible to replicate using the archive mechanism. As of
>> version 9.3 you don't need to configure archiving to follow timeline
>> switches, just add a recovery.conf to the old master to start it up as
>> a slave and it will fetch everything it needs from the new master.
>>
> I took your advice and I understood that removing the recovery.conf followed
> by a restart is wrong. I will not do that on my production servers.
>
> However, I can't make it work with promotion. What did I do wrong? It was
> 9.4beta3.
>
> mkdir 1
> mkdir 2
> initdb -D 1/
> <edit config: change port, wal_level to hot_standby, hot_standby to on,
> max_wal_senders=7, wal_keep_segments=100, uncomment replication in hba.conf>
> pg_ctl -D 1/ start
> createdb -p 5433
> psql -p 5433
> pg_basebackup -p 5433 -R -D 2/
> mcedit 2/postgresql.conf <change port>
> chmod -R 700 1
> chmod -R 700 2
> pg_ctl -D 2/ start
> psql -p 5433
> psql -p 5434
> <everything works>
> pg_ctl -D 1/ stop
> pg_ctl -D 2/ promote
> psql -p 5434
> cp 2/recovery.done 1/recovery.conf
> mcedit 1/recovery.conf <change port>
> pg_ctl -D 1/ start
>
> LOG:  replication terminated by primary server
> DETAIL:  End of WAL reached on timeline 1 at 0/3000AE0.
> LOG:  restarted WAL streaming at 0/3000000 on timeline 1
> LOG:  replication terminated by primary server
> DETAIL:  End of WAL reached on timeline 1 at 0/3000AE0.
>
> This is what I experienced in the past when I tried with promote. The old
> master disconnects from the new. What am I missing?
>
I think you have to add
recovery_target_timeline = '2'
in recovery.conf,
with '2' being the new primary's timeline.
cf. http://www.postgresql.org/docs/9.4/static/recovery-target-settings.html

Didier



Re: Failback to old master

From: "Maeldron T."
On 16/11/14 13:13, didier wrote:
> I think you have to add
> recovery_target_timeline = '2'
> in recovery.conf,
> with '2' being the new primary's timeline.
> cf. http://www.postgresql.org/docs/9.4/static/recovery-target-settings.html
>
>
Thank you.

Based on the link I have added:
recovery_target_timeline = 'latest'

And now it works.





Re: Failback to old master

From: "Maeldron T."


2014-11-13 9:05 GMT+01:00 Heikki Linnakangas <hlinnakangas@vmware.com>:

> Right. You have to be careful to make sure the standby really did fully catch up with the master, though. If it happens that the replication connection is momentarily down when you shut down the master, for example, then the master won't wait for the standby. You can use pg_controldata to verify that, before promoting the standby.
>
> - Heikki


Dear Heikki,

would you please tell me which line I should check to be 100% sure that everything was sent to the slave when the master was shut down?

Latest checkpoint location:           1F/B842C3D8
Prior checkpoint location:            1F/B837B9B8
Latest checkpoint's REDO location:    1F/B841A050
Latest checkpoint's REDO WAL file:    000000010000001F000000B8
Latest checkpoint's TimeLineID:       1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0/15845855
Latest checkpoint's NextOID:          450146
Latest checkpoint's NextMultiXactId:  2250
Latest checkpoint's NextMultiOffset:  4803
Latest checkpoint's oldestXID:        984
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  15845855
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1

Is it the first line (Latest checkpoint location), or do I have to check anything else?

I plan to do this on the weekend.

Thank you.

M.