Re: Patroni vs pgpool II - Mailing list pgsql-general

From:           Jehan-Guillaume de Rorthais
Subject:        Re: Patroni vs pgpool II
Msg-id:         20230407124612.7a6d7231@karst
In response to: Re: Patroni vs pgpool II (Tatsuo Ishii <ishii@sraoss.co.jp>)
Responses:      Re: Patroni vs pgpool II
                Re: Patroni vs pgpool II
List:           pgsql-general
On Fri, 07 Apr 2023 18:04:05 +0900 (JST)
Tatsuo Ishii <ishii@sraoss.co.jp> wrote:

> > And I believe that's part of what Cen was complaining about:
> >
> > «
> > It is basically a daemon glued together with scripts for which you are
> > entirely responsible for. Any small mistake in failover scripts and
> > cluster enters a broken state.
> > »
> >
> > If you want to build something clean, including fencing, you'll have to
> > handle/dev it by yourself in scripts
>
> That's a design decision. This gives maximum flexibility to users.

Sure, no problem with that. But people have to realize that the downside is
that it leaves the whole complexity and reliability of the cluster in the
hands of the administrator. And these scripts are much more complicated and
racy than a simple node promotion. Even dealing with a simple vIP can become
a nightmare if not done correctly.

> Please note that we provide step-by-step installation/configuration
> documents which has been used by production systems.
>
> https://www.pgpool.net/docs/44/en/html/example-cluster.html

These scripts rely on SSH, which is really bad. What if you have an SSH
failure in the mix? Moreover, even if SSH were not a weakness by itself, the
script doesn't even try to shut down the old node or stop the old primary.

You can add to the mix that both Pgpool and SSH rely on TCP for availability
checks and actions. You'd better have very low TCP timeouts/retries...

When a service loses quorum on a resource, it is supposed to shut down as
fast as possible... or even self-fence using a watchdog device if the
shutdown action doesn't return fast enough.

> >> However I am not sure STONITH is always mandatory.
> >
> > Sure, it really depend on how much risky you can go and how much
> > complexity you can afford. Some cluster can leave with a 10 minute split
> > brain where some other can not survive a 5s split brain.
> >
> >> I think that depends what you want to avoid using fencing. If the
> >> purpose is to avoid having two primary servers at the same time,
> >> Pgpool-II achieve that as described above.
> >
> > How could you be so sure?
> >
> > See https://www.alteeve.com/w/The_2-Node_Myth
> >
> > «
> > * Quorum is a tool for when things are working predictably
> > * Fencing is a tool for when things go wrong
>
> I think the article does not apply to Pgpool-II. It is a simple example
> using NFS.

The point here is that when things are getting unpredictable, quorum alone
is just not enough. So yes, it does apply to Pgpool. Quorum is nice when
nodes can communicate with each other, when they have enough time and/or a
low enough load to complete actions correctly. My point is that a proper
cluster with an anti-split-brain solution requires both quorum and fencing.

> [...]
> > Later, node 1 recovers from its hang.
>
> Pgpool-II does not allow an automatic recover.

Neither does this example. There's no automatic recovery. It just states
that node 1 was unable to answer in a timely fashion, just long enough for a
new quorum to be formed and a new primary elected. But node 1 was not dead,
and when node 1 is able to answer again, boom.

A service being mute for some period of time is really common. There are
various articles and conference feedbacks about clusters failing over
wrongly because of e.g. a high load on the primary... The last one was
during FOSDEM, IIRC.

> If node 1 hangs and once it is recognized as "down" by other nodes, it
> will not be used without manual intervention. Thus the disaster described
> above will not happen in pgpool.

OK, so I suppose **all** connections, scripts, software, backups,
maintenance and admins must go through Pgpool to be sure to hit the correct
primary. This might be acceptable in some situations, but I wouldn't call
that an anti-split-brain solution. It's some kind of «software hiding the
rogue node behind a curtain and pretending it doesn't exist anymore».

Regards,
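P.S. To put rough numbers on the TCP timeout/retry point above, here is a
back-of-the-envelope sketch. It assumes the classic exponential-backoff
model with a 0.2 s initial RTO and the 120 s TCP_RTO_MAX cap; a real kernel
derives the RTO from the measured RTT, so treat the figures as illustrative
only:

```python
# Sketch: worst-case time before the kernel gives up retransmitting on an
# established TCP connection, under a simplified exponential-backoff model.
# Assumed values: 0.2 s initial RTO, 120 s cap (TCP_RTO_MAX).

def tcp_give_up_time(retries, initial_rto=0.2, rto_max=120.0):
    """Sum the retransmission intervals for `retries` attempts,
    doubling the RTO each time up to the cap."""
    total, rto = 0.0, initial_rto
    for _ in range(retries):
        total += rto
        rto = min(rto * 2, rto_max)
    return total

# With the Linux default net.ipv4.tcp_retries2 = 15, a dead peer is only
# declared dead after roughly 13 minutes -- far too long for failover.
print(round(tcp_give_up_time(15)))  # -> 805 (seconds)

# Lowering tcp_retries2 (e.g. to 6) shrinks that window dramatically.
print(round(tcp_give_up_time(6)))   # -> 13 (seconds)
```

Which is why any availability check or fencing action carried over plain TCP
(Pgpool health checks, SSH-based scripts) needs aggressive timeouts to be of
any use during a failure.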