Re: Call for 7.5 feature completion - Mailing list pgsql-hackers
From: Tatsuo Ishii
Subject: Re: Call for 7.5 feature completion
Date:
Msg-id: 20040518.095600.74753745.t-ishii@sra.co.jp
In response to: Re: Call for 7.5 feature completion (Jan Wieck <JanWieck@Yahoo.com>)
List: pgsql-hackers
> Bruce Momjian wrote:
> > Marc G. Fournier wrote:
> >> On Mon, 17 May 2004, Bruce Momjian wrote:
> >>
> >> > > Most hopefully this is very discouraging! Connection pools are a nice
> >> > > thing and I have used pgpool recently with great success, for pooling
> >> > > connections. But attempting to deliver multimaster replication as a
> >> > > byproduct of a connection pool isn't going to become an enterprise
> >> > > feature. And the more half-baked, half-functional and half-reliable
> >> > > replication attempts there are, the harder it will be to finally get a
> >> > > real solution being recognized.
> >> >
> >> > Well, considering we offer _nothing_ for multi-master right now, I think
> >> > it is a valuable project.
> >>
> >> Connection pooling is *not* multi master ... it doesn't even simulate
> >> multi-master ... multi-master, at least as far as I'm aware, means "no
> >> point of failure", and connection pooling creates a *single* point of
> >> failure ... the pgpool process dies, you've lost all connections to the
> >> database ...

I think multi-master has nothing to do with freedom from a single point of
failure. A multi-master replication system could have its own single point
of failure (for example, some systems have a single "coordinator server").
On the other hand, a single-master replication system could avoid a single
point of failure by using some external mechanism (for example, UltraMonkey).

> > I think people are confusing pgpool with pgcluster.
>
> And you wonder where that's coming from, eh? Tatsuo is advertising
> pgpool as a synchronous replication system suitable for failover.
> Quoting from the pgpool-1.0 README:

Please do not use the word "failover" for pgpool's replication
functionality. "Failover" means it could continue the replication
operation with an alternative database. pgpool does not do that in
replication mode. Instead, it disconnects the failed DB and continues
operation with the healthy DB (with no replication, of course). That's
why I use the word "degeneration" for pgpool's replication mode.
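To make "degeneration" concrete, here is a minimal sketch of the idea
(Python with psycopg2; the host names, DSNs and table are made up for
illustration, and this is not pgpool's actual implementation, which is
written in C): the same statement is sent to every backend, and a backend
that errors out is simply disconnected so that service continues on the
remaining healthy one.

import psycopg2

# Hypothetical backends; in pgpool terms, a master and a secondary.
BACKEND_DSNS = [
    "host=master1 port=5432 dbname=test",
    "host=master2 port=5432 dbname=test",
]

# One connection per backend.
backends = {dsn: psycopg2.connect(dsn) for dsn in BACKEND_DSNS}

def execute_everywhere(sql, params=None):
    # Send the same statement to every live backend ("replication").
    # A backend that fails is dropped ("degeneration") and the
    # operation continues on whatever backends remain healthy.
    for dsn, conn in list(backends.items()):
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
            conn.commit()
        except psycopg2.Error:
            conn.close()
            del backends[dsn]
            print("backend %s degenerated; continuing without it" % dsn)

execute_everywhere("INSERT INTO t1 VALUES (%s)", (1,))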
> pgpool could be used as a replication server. This allows real-time
> backuping of the database to avoid disk failures. pgpool sends
> exactly same query to each PostgreSQL servers to accomplish
> replication. So pgpool can be regarded as a "synchronous
> replication server".
>
> Don't get me wrong, as said pgpool works great for the purpose I tested,
> the pooling. But statements like that are causing the confusion here.

Could you tell me why the above is confusing? If it's really confusing,
I'm glad to improve it. Or are you saying pgpool should not be regarded
as having a "replication facility"? Or are you saying that pgpool is too
similar to PGCluster? PGCluster is a multi-master/multi-slave/sync
replication system; pgpool is single-master/single-slave/sync
replication. There's a clear distinction. Single vs. multi-master is a
BIG difference, and I have never stated that pgpool is a multi-master
replication system.

BTW, the reason I developed pgpool with replication functionality is
that there is no single perfect replication solution. Here are my
comments on the officially released replication systems (from my own
point of view, of course):

1) DBMirror
   good: simple and easy to use.
   bad: cannot handle too much traffic. Cannot replicate large objects.

2) PGCluster
   good: can handle failover and recovery. SELECT load balancing is really nice.
   bad: requires many PCs. Update performance is not good. Cannot replicate large objects.

3) pgpool
   good: simple and easy to use. Can replicate large objects. Update performance is not too bad.
   bad: no load balancing, no failover.

I'm interested in whether Slony-I solves all of these "bad" points. I
will try it when I have spare time.
--
Tatsuo Ishii