Thread: High-Availability Question
I have a MySQL database server in my web application; it includes 550 tables with about 100,000 rows in each table. Now I want to migrate the database server to PostgreSQL and build an HA environment.

I have two 2U servers for the PostgreSQL servers (one master, the other a slave), and two 1U servers for any other purpose in the HA environment. I plan to use FreeBSD as the operating system.

My requirement is that when a server fails (whether the master or the slave), the web servers can still access (read/write) the database correctly.

I have three ideas for the HA environment. Can anyone give me some advice, or point me toward more advanced approaches? Thanks. :)

First idea:
Install pgpool on each web server (the web server farm includes 16 web servers) and configure pgpool in replication mode. The web application (written in PHP) accesses the database servers through the local pgpool daemon.
This idea sounds a little unusual; does it seem workable?
The web application uses only very simple SQL statements, nothing complex.

Second idea:
Install pgpool and Linux-HA on the two 1U servers and configure pgpool in replication mode. The web application accesses the database servers through the pgpool daemon.

Third idea:
Install Slony-I, Linux-HA, and PostgreSQL together on the two 2U servers. The web application accesses the database servers directly, without pgpool.

sincerely,
proace
> I have a MySQL database server in my web application; it includes 550
> tables with about 100,000 rows in each table. Now I want to migrate the
> database server to PostgreSQL and build an HA environment.
>
> I have two 2U servers for the PostgreSQL servers (one master, the other
> a slave), and two 1U servers for any other purpose in the HA
> environment. I plan to use FreeBSD as the operating system.
>
> My requirement is that when a server fails (whether the master or the
> slave), the web servers can still access (read/write) the database
> correctly.
>
> I have three ideas for the HA environment. Can anyone give me some
> advice, or point me toward more advanced approaches? Thanks. :)
>
> First idea:
> Install pgpool on each web server (the web server farm includes 16 web
> servers) and configure pgpool in replication mode. The web application
> (written in PHP) accesses the database servers through the local pgpool
> daemon.
> This idea sounds a little unusual; does it seem workable?
> The web application uses only very simple SQL statements, nothing
> complex.

This is not unusual, and actually I think it's a good idea. You could also enjoy the advantage of the load-balancing capability of pgpool in this case.

> Second idea:
> Install pgpool and Linux-HA on the two 1U servers and configure pgpool
> in replication mode. The web application accesses the database servers
> through the pgpool daemon.

I'm not familiar with Linux-HA, so I have no idea whether this works or not.

> Third idea:
> Install Slony-I, Linux-HA, and PostgreSQL together on the two 2U
> servers. The web application accesses the database servers directly,
> without pgpool.

I'm not sure what would happen with this configuration.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
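For the first idea, a minimal pgpool.conf sketch for replication mode on each web server could look like the fragment below. This assumes pgpool-II parameter names; the host names and ports are placeholders, not details from this thread:

```ini
# pgpool.conf (fragment) -- hypothetical host names and ports
listen_addresses = 'localhost'   # the local PHP app connects here
port = 9999

replication_mode = true          # send write statements to all backends
load_balance_mode = true         # spread SELECTs across backends

# first 2U server
backend_hostname0 = 'db1.example.com'
backend_port0 = 5432
backend_weight0 = 1

# second 2U server
backend_hostname1 = 'db2.example.com'
backend_port1 = 5432
backend_weight1 = 1
```

Each web server's PHP code would then connect to localhost:9999 rather than to a database host directly.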
Hello,

> > Third idea:
> > Install Slony-I, Linux-HA, and PostgreSQL together on the two 2U
> > servers. The web application accesses the database servers directly,
> > without pgpool.

AFAIK, Slony does not by itself provide the failure-detection and failover capabilities you need in an HA solution:

http://gborg.postgresql.org/project/slony1/genpage.php?howto_overview

So you will need a pgpool layer (or some other connection manager plus network monitoring software) to detect failures anyway.

Regards,
Ivan Zolotukhin
> > First idea:
> > Install pgpool on each web server (the web server farm includes 16
> > web servers) and configure pgpool in replication mode. The web
> > application (written in PHP) accesses the database servers through
> > the local pgpool daemon.
> > This idea sounds a little unusual; does it seem workable?
> > The web application uses only very simple SQL statements, nothing
> > complex.
>
> This is not unusual, and actually I think it's a good idea. You could
> also enjoy the advantage of the load-balancing capability of pgpool in
> this case.

I say the idea is a little unusual because I'm concerned about the health-check status of each pgpool daemon (one on each web server). If the health-check status differs between pgpool instances, the data will become inconsistent. What situations could cause the health-check status to differ?

PS: My web server farm and database servers are not in the same subnet, but the network architecture between those subnets is fully redundant. The database servers even bind two network interfaces to increase availability.
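For what it's worth, each pgpool instance probes the backends on its own timer, so the relevant knobs are the health-check parameters. A sketch with pgpool-II parameter names and purely illustrative values:

```ini
# pgpool.conf (fragment) -- illustrative values
health_check_period = 10       # seconds between probes; 0 disables checking
health_check_timeout = 20      # seconds before a probe is considered failed
health_check_user = 'pgpool'   # database user for the probe connection
```

A transient network fault seen by only one web server can make that one instance alone mark a backend as down (degenerate it), which is exactly the divergence being asked about.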
Ivan Zolotukhin wrote:
> Hello,
>
>> > Third idea:
>> > Install Slony-I, Linux-HA, and PostgreSQL together on the two 2U
>> > servers. The web application accesses the database servers directly,
>> > without pgpool.
>
> AFAIK, Slony does not by itself provide the failure-detection and
> failover capabilities you need in an HA solution:
>
> http://gborg.postgresql.org/project/slony1/genpage.php?howto_overview
>
> So you will need a pgpool layer (or some other connection manager plus
> network monitoring software) to detect failures anyway.

I believe the HA scripts would need to call the appropriate slonik scripts to make the failover occur "properly" (i.e., calling the slonik failover command). I think this is a viable option, albeit a dangerous one (since some committed transactions might be lost). Normally you'd also have the HA script take over the IP of the master, but if you used pgpool (in master/slave mode) you could instead have it tell pgpool to make the switchover for you. Keep in mind that if you ran a separate pgpool server, you'd want to ensure it had a backup as well.

pgpool is nice, since when a master fails the changes aren't lost on the slave. However, AFAIK getting to the starting point (where both servers are identical) would require the cluster to be down for a while. Slony-I does a much better job here in that it performs the replication itself (it will bring the slave up to a consistent state automatically). However, that can be a time-consuming task, depending on how much data you have.

Just a thought, but I think it should be possible to use Slony-I to bring a master and a slave up to a consistent state, then perform a 'lock set', then a 'wait for event' (to ensure both nodes are consistent), then disable Slony-I (so both nodes are writable), and then switch pgpool from degeneration mode back to replication mode. I think this would give you the best of both worlds: pgpool without the startup downtime, synchronous replication, etc.
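The slonik failover command referred to here might look roughly like this; the cluster name, node ids, and conninfo strings are assumptions for illustration only:

```
# failover.slonik -- hypothetical cluster and connection details
cluster name = webcluster;

node 1 admin conninfo = 'dbname=app host=db1.example.com user=slony';
node 2 admin conninfo = 'dbname=app host=db2.example.com user=slony';

# promote node 2 after node 1 (the origin) has failed
failover (id = 1, backup node = 2);
```

An HA script would run this with `slonik failover.slonik` once it decides the master is truly dead.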
You'd have a "recovery period" while Slony-I synced things up. You'd still have some of the pgpool shortcomings (most notably things like 'SELECT nextval()' statements and the lack of secure authentication), but you wouldn't suffer data loss in the case of a failure.

--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC 27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com