Re: Having trouble configuring a Master with multiple standby Servers in PostgreSQL 9.3.3 - Mailing list pgsql-bugs
From |  |
---|---|
Subject | Re: Having trouble configuring a Master with multiple standby Servers in PostgreSQL 9.3.3 |
Date |  |
Msg-id | 20140418104306.5a830134ae84016b0174832fdc1a3173.9719844f57.wbe@email11.secureserver.net |
List | pgsql-bugs |
Sorry folks, we fixed the problem. It turned out that in the recovery.conf file I had primary_conninfo "... sslmode=require". This caused the error "could not connect to the primary server: sslmode value "require" invalid when SSL support is not compiled". So we just removed that option, bounced the slave, and everything is working now.

thanks

-------- Original Message --------
Subject: Re: [BUGS] Having trouble configuring a Master with multiple standby Servers in PostgreSQL 9.3.3
From: <fburgess@radiantblue.com>
Date: Fri, April 18, 2014 8:24 am
To: "Michael Paquier" <michael.paquier@gmail.com>
Cc: <pgsql-bugs@postgresql.org>

I started the recovery yesterday and it ran overnight and is still running. Is the problem that the master is still producing new archivelogs that the slave is trying to recover, so that I am currently in a perpetual recovery mode? I can see that the most recent archivelog being processed on the master is also being recovered on the slave.
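The fix described in the top message amounts to dropping the sslmode keyword from the standby's connection string. A minimal sketch of the relevant recovery.conf line; the host, port, and user values are placeholders for illustration, not taken from this thread:

```
# Before: fails when the server binaries were built without SSL support
# primary_conninfo = 'host=master.example.com port=5432 user=replicator sslmode=require'

# After: omit sslmode so libpq falls back to its default behavior
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```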
Do I need to suspend copying the archivelogs to the /mnt/server/slave1_archivedir/ directory, or should I wait?

thanks

Freddie

-------- Original Message --------
Subject: Re: [BUGS] Having trouble configuring a Master with multiple standby Servers in PostgreSQL 9.3.3
From: Michael Paquier <michael.paquier@gmail.com>
Date: Thu, April 17, 2014 5:28 pm
To: fburgess@radiantblue.com
Cc: pgsql-bugs@postgresql.org

On Fri, Apr 18, 2014 at 1:19 AM, <fburgess@radiantblue.com> wrote:
> Hi Michael, thanks for your reply.
>
> I discussed this with my colleague, and we decided to change the archive_command
> to execute a shell script.

That's wiser as it allows more flexibility.

> This will copy the archivelogs from the master to both slaves. Will that
> avoid the issue with removing needed WAL files?
> slave 1
> archive_cleanup_command = 'pg_archivecleanup /mnt/server/slave1_archivedir/ %r'
> slave 2
> archive_cleanup_command = 'pg_archivecleanup /mnt/server/slave2_archivedir/ %r'
> Does this look correct?

Looks fine. You are copying each WAL file to a different archive folder, and pg_archivecleanup will clean only the path it is given for each folder, so there is no risk of a WAL file being removed by one slave while still needed by the other.

> I did a pg_ctl reload to change the archivelog destination from
> /mnt/server/master_archivedir to be redistributed to slave1 and slave2. Do I
> need to redo this backup step?
Not if the slaves have already fetched the necessary WAL files from the single master archive before you changed the command.

> psql -c "select pg_start_backup('initial_backup');"
> rsync -cvar --inplace --exclude=*pg_xlog*
> /u01/fiber/postgreSQL_data/ postgres@1.2.3.5:/u01/fiber/postgreSQL_data/
> psql -c "select pg_stop_backup();"
>
> or can I just copy all of the missing archivelog files from the
> /mnt/server/master_archivedir to the slaves, and then restart the slaves in
> recovery mode?

Taking a new base backup will be fine. But you actually do not need to do so if your slaves have already caught up enough. Your slaves are using streaming replication and are on the same server as the master AFAIU, so they should be fine, but there is always a possibility that they need some WAL from the archives if, for example, one of them was not able to connect to the master for a long time and the master had already dropped the necessary WAL files from its pg_xlog.
--
Michael
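Putting the advice in this exchange together, each standby's recovery.conf would look roughly like the following. The archive directories and the archive_cleanup_command come from the thread; standby_mode, restore_command, and the connection details are assumptions added for illustration:

```
# Hypothetical recovery.conf for slave 1; slave 2 would be identical
# except that it points at /mnt/server/slave2_archivedir/.
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
restore_command = 'cp /mnt/server/slave1_archivedir/%f %p'
# Each standby cleans only its own archive directory, so neither can
# remove a WAL segment the other one still needs.
archive_cleanup_command = 'pg_archivecleanup /mnt/server/slave1_archivedir/ %r'
```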
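The archive_command shell script discussed above is not shown in the thread. A sketch of what such a script might look like, assuming the two slave archive directories named in the thread; the function name and the ARCHIVE_DESTS variable are hypothetical, not the poster's actual script:

```shell
#!/bin/sh
# Hypothetical archive helper: copy one completed WAL segment into each
# slave's archive directory. PostgreSQL would invoke it via something like:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
archive_wal() {
    wal_path=$1   # %p: path of the WAL segment, relative to the data directory
    wal_file=$2   # %f: bare file name of the segment
    for dest in $ARCHIVE_DESTS; do
        # Refuse to overwrite an already-archived segment; a non-zero exit
        # makes PostgreSQL keep the segment and retry later.
        if [ -f "$dest/$wal_file" ]; then
            return 1
        fi
        cp "$wal_path" "$dest/$wal_file" || return 1
    done
}

# Default destinations taken from the thread; overridable for testing.
ARCHIVE_DESTS=${ARCHIVE_DESTS:-"/mnt/server/slave1_archivedir /mnt/server/slave2_archivedir"}
```

Since both slaves read from their own directory, this keeps each pg_archivecleanup confined to one copy of the archive, as Michael notes above.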