Re: Justifying a PG over MySQL approach to a project - Mailing list pgsql-general

From Gauthier, Dave
Subject Re: Justifying a PG over MySQL approach to a project
Date
Msg-id 482E80323A35A54498B8B70FF2B879800437F856C2@azsmsx504.amr.corp.intel.com
In response to Re: Justifying a PG over MySQL approach to a project  ("Greg Sabino Mullane" <greg@turnstep.com>)
Responses Re: Justifying a PG over MySQL approach to a project  (Scott Marlowe <scott.marlowe@gmail.com>)
Re: Justifying a PG over MySQL approach to a project  ("Greg Sabino Mullane" <greg@turnstep.com>)
List pgsql-general
One concern I have about these trigger-based replication systems is that they may ping the slave separately for each and every DML statement in a transaction. My slave will literally be 1400 miles away, and all replication traffic will go over the net. If I have a transaction with 1000 DML statements in it, is this thing going to update the slave 1000 separate times over the net? (I may not live long enough to see it finish.) Or will it be smart enough to wait until I "commit" and then send over a single bundle of 1000 DML statements? The time difference will be more than significant.
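 
Back-of-envelope, just to show why I care (my numbers, not measured): 1400 miles is roughly 2250 km, so even an optimistic WAN round trip is on the order of 40 ms. 1000 sequential round trips would be ~40 seconds of pure network wait, whereas one bundled transfer after commit pays that 40 ms once, plus the actual data transfer.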
 
Thanks for all the great input on this!
 
-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Greg Sabino Mullane
Sent: Thursday, December 17, 2009 11:58 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Justifying a PG over MySQL approach to a project
 
 
-----BEGIN PGP SIGNED MESSAGE-----                                      
Hash: RIPEMD160                                                         
 
 
> How difficult is it to switch the master's hat from one DB instance
> to another?  Let's say the master in a master-slave scenario goes  
> down but the slave is fine.  Can I designate the slave as being the
> new master, use it for read/write, and then just call the broken   
> master the new slave once it comes back to life (something like that)?
 
Sure. Bucardo slaves are not changed at all, so they are already
read/write and don't need anything special done to "unslave" them.
 
One possible way to handle the scenario is:
 
Assuming three servers:
* A (master) sends changes to B, receives read/write queries
* B (slave) has transaction_read_only set to true (see the config note below), receives read queries
* C has the Bucardo database and daemon
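 
(A note on that read-only flag: transaction_read_only itself is per-session, so to make all of B read-only you'd set the server-wide default in B's postgresql.conf and reload:
 
  default_transaction_read_only = on
)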
 
Box A goes down suddenly.
 
* Stop Bucardo on box C
* Flip the boxes around in the bucardo.db table
* Do a 'bucardo_ctl validate sync all'
  (This will create the needed triggers on B)
* Set B's transaction_read_only to false
* Point your apps at B instead of A for read/write queries (sketched as commands below)
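 
Roughly, as commands (the hostnames, the row names in bucardo.db, and the
database name "mydb" are made up; adjust to your setup and Bucardo version):
 
  # on C: stop the Bucardo daemon
  bucardo_ctl stop
 
  # on C: swap source and target hosts in the bucardo.db table
  # (column and row names here are illustrative)
  psql -d bucardo -c "UPDATE bucardo.db SET dbhost='b.example.com' WHERE name='master_db';"
  psql -d bucardo -c "UPDATE bucardo.db SET dbhost='a.example.com' WHERE name='slave_db';"
 
  # on C: recreate the triggers on the new source (B)
  bucardo_ctl validate sync all
 
  # on B: set default_transaction_read_only = off in postgresql.conf, then reload
  psql -d mydb -c "SELECT pg_reload_conf();"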
 
When A comes back up (again, rough commands below):
 
* DROP SCHEMA bucardo CASCADE; (drops all triggers)
* Set transaction_read_only to true
* Start Bucardo on C
* Once caught up, point read-only queries to A
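 
Again as commands (same caveats as the sketch above):
 
  # on A, once it is back up: drop the stale Bucardo triggers
  psql -d mydb -c "DROP SCHEMA bucardo CASCADE;"
 
  # on A: set default_transaction_read_only = on in postgresql.conf, then reload
  psql -d mydb -c "SELECT pg_reload_conf();"
 
  # on C: restart replication, now flowing toward the new slave (A)
  bucardo_ctl start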
 
If you are in a rush, you can point things at B immediately after A fails,
but you'll then have to recopy the entire table data to the new slave, as
the triggers won't have been in place yet to capture the changes.
 
- --
Greg Sabino Mullane greg@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912171153
-----BEGIN PGP SIGNATURE-----
 
iEYEAREDAAYFAksqYqUACgkQvJuQZxSWSsjZtQCfTwbI3f9W0z+82IU7lL+2LwNK
aUYAnj3AMjRDOeFIuHDee4JJemneArie
=75Ho
-----END PGP SIGNATURE-----
 
 
 
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
 
