Thread: validation of hot standby
Hello,
We set up our hot standby by putting the master into archive mode and taking a pg_basebackup, and we are about to start the hot standby instance. What would you suggest checking to validate that it comes up cleanly and that replication has caught up?
What else do I need to check to make sure the data is clean and not corrupted?
thanks for your help
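For reference, the setup was roughly along these lines (hostnames, paths, and the replication user below are placeholders, not our actual values):

    # master postgresql.conf (restart/reload after changing):
    wal_level = replica                  # 'hot_standby' on pre-9.6 releases
    archive_mode = on
    archive_command = 'cp %p /path/to/wal_archive/%f'
    max_wal_senders = 5

    # master pg_hba.conf: allow the standby's replication user to connect:
    host  replication  replicator  192.0.2.10/32  md5

    # on the standby host: clone the master, then start it:
    pg_basebackup -h master.example.com -U replicator \
        -D /var/lib/postgresql/9.6/main -X stream -P
    # standby recovery.conf: standby_mode = 'on', plus primary_conninfo
    # (streaming) and/or a restore_command pointing at the WAL archive;
    # also set hot_standby = on in the standby's postgresql.conf
    pg_ctl -D /var/lib/postgresql/9.6/main start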
On 2/28/19 8:00 PM, Julie Nishimura wrote:
> What would you suggest checking to validate that the standby comes up cleanly and that replication has caught up?

Run the program pg_controldata on the secondary server. Specifically, look for the last checkpoint timestamp. It will lag some, depending on how checkpoint frequency is configured on the master.
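For example, something along these lines (the data directory path below is just a placeholder, and the pg_stat_replication check only applies if you are using streaming replication):

    # on the standby: when was the latest replayed checkpoint?
    pg_controldata /var/lib/postgresql/9.6/main | grep -i checkpoint

    -- on the standby: how far behind is WAL replay, in wall-clock terms?
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;

    -- on the master: is the standby connected and streaming?
    SELECT client_addr, state, sent_location, replay_location
      FROM pg_stat_replication;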
--
Angular momentum makes the world go 'round.
Thread: master-> 2 hot standbys
Hello,
Our current 9.2 master has two active standbys. Can you please help me out with the right sequence of events to promote one of the current standbys to master and convert the old master to a standby?
Your help is appreciated.
Thanks,
Julie
From: Julie Nishimura
Sent: Thursday, February 28, 2019 6:00 PM
To: pgsql-general@lists.postgresql.org
Subject: validation of hot standby
On Mar 4, 2019, at 1:59 PM, Julie Nishimura <juliezain@hotmail.com> wrote:
> Our current 9.2 master has two active standbys. Can you please help me out with the right sequence of events to promote one of the current standbys to master and convert the old master to a standby?
It depends on how you've set things up, of course, but generally you can run "pg_ctl promote" on one of your slaves. This will make that slave start a new timeline and begin to act as a master. To restore your old master to a standby, you will need to rebuild it with something like pg_basebackup, as I do not believe pg_rewind is supported with 9.2.
Terminating client connections on the old master to avoid split-brain, and then making those clients reconnect to your new master, is left as an exercise for the reader - how you do it really depends on your environment - but be sure you do it right, because split-brain is a real danger. In a similar vein, you don't want two masters trying to save the same WAL files to the same WAL archive.
Also, if your two slaves are using asynchronous replication, be aware that the slave you choose to become the new master may be behind the other slave in WAL replay when you promote it, unless you take precautions. If that happens, your other slave won't be able to sync up with the newly promoted master.
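In bare-bones terms, the sequence might look something like this (the paths are placeholders, and the function names are the 9.x spellings):

    -- on each slave, check which one has replayed the furthest before choosing:
    SELECT pg_last_xlog_replay_location(), pg_last_xact_replay_timestamp();

    # stop the old master first, so nothing keeps writing to it:
    pg_ctl -D /path/to/old_master_data stop -m fast

    # promote the chosen slave; it begins a new timeline:
    pg_ctl -D /path/to/new_master_data promote

    -- on the new master, confirm it has left recovery:
    SELECT pg_is_in_recovery();          -- should now return false

    # then repoint the surviving slave's recovery.conf (primary_conninfo) at
    # the new master, and rebuild the old master with pg_basebackup if you
    # want it back as a standby.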
Thank you. I made a mistake - our version is actually PostgreSQL 9.6.2.
From: Ben Chobot <bench@silentmedia.com>
Sent: Monday, March 4, 2019 5:08 PM
To: Julie Nishimura
Cc: pgsql-general@lists.postgresql.org
Subject: Re: master-> 2 hot standbys
Hello,
I have another question about the scenario we are thinking about.
So, we had master -> standby (both the same version: PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit). We almost ran out of space and could not extend our current storage, so the decision was made to build a new, bigger standby and add it as a second hot standby (let's call it "db_3"). We put our master in archive mode, started log shipment, created the second standby with pg_basebackup, rsynced the WALs, and were able to start the second standby. So now we have master -> 2 standbys. However, the second standby was built on PostgreSQL 9.6.11 on x86_64-pc-linux-gnu (Ubuntu 9.6.11-1.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit. Currently replication is up and running from the master (db_1) to both standbys (db_2 and db_3).
Now we would like to break replication, remove db_3 from this picture, and let it function as a stand-alone server, while the remaining databases continue to reside on db_1 and replicate to the standby db_2. (We have multiple databases on that cluster, but only one of them - a 19 TB beast - is our concern; it is the one we are trying to run stand-alone on db_3.)
However, we are concerned about the write performance on db_3. Currently we are unable to test write performance on db_3, since it is in read-only mode (as a standby). If we go ahead with this plan and let it run for 2-3 days, and we are not satisfied with the performance, how can we fail back to db_1? Is that even possible, given that only the 19 TB database will be running on db_3 while the other roughly 30 databases on that cluster keep running on db_1 independently?
I don't think it is possible - rolling back would look like a merge of whatever was written on db_3 back into db_1... How do you think we can test the write performance on db_3?
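One thought, in case it makes sense: once db_3 is promoted and accepting writes, I assume we could point pgbench at a scratch database there just to get a rough write-throughput number (the database name and the numbers below are only placeholders):

    # on db_3, after it is out of recovery:
    createdb pgbench_scratch
    pgbench -i -s 100 pgbench_scratch            # create and load the test tables
    pgbench -c 16 -j 4 -T 300 pgbench_scratch    # 5-minute read/write run
    dropdb pgbench_scratch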
Thanks for ANY ideas and suggestions
From: Ben Chobot <bench@silentmedia.com>
Sent: Monday, March 4, 2019 5:08 PM
To: Julie Nishimura
Cc: pgsql-general@lists.postgresql.org
Subject: Re: master-> 2 hot standbys