Thread: Standby registration
(starting yet another thread to stay focused)

Having mulled through all the recent discussions on synchronous replication, ISTM there is pretty wide consensus that having a registry of all standbys in the master is a good idea. Even those who don't think it's *necessary* for synchronous replication seem to agree that it's nevertheless a pretty intuitive way to configure it. And it has some benefits even if we never get synchronous replication.

So let's put synchronous replication aside for now, and focus on standby registration first. Once we have that, the synchronous replication patch will be much smaller and easier to review.

The consensus seems to be to use a configuration file called standby.conf. Let's use the "ini file format" for now [1].

Aside from synchronous replication, there are three nice things we can do with a standby registry:

A) Make monitoring easier. Let's have a system view to show the status of all standbys [2].

B) Improve authorization. At the moment, we require superuser rights for connecting in replication mode. That's pretty ugly, because superuser rights imply that you can do anything; you'd typically want to restrict access from the standby to replication only and nothing else. You can lock it down in pg_hba.conf to allow the superuser to connect only in replication mode, but it still gives me the creeps. When each standby has a name, we can associate standbys with roles, so that you have to be user X to replicate as standby Y.

C) Smarter replacement for wal_keep_segments. Instead of always keeping wal_keep_segments WAL files around, once we know how far each standby has replicated, we can keep just the right amount. I think we'll still want a global upper limit to avoid running out of disk space on the master in case of emergency, though.

Any volunteers on implementing that? Fujii-san?

[1] http://archives.postgresql.org/pgsql-hackers/2010-09/msg01195.php
[2] http://archives.postgresql.org/pgsql-hackers/2010-09/msg00932.php

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
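For illustration, a registry file in the proposed ini format might look like the sketch below. The section names and keys are purely hypothetical; the actual format is still under discussion in [1]:

    # standby.conf -- hypothetical sketch, section names and keys are not final
    [standby1]
    synchronous = off      # ordinary asynchronous standby
    keep_wal = on          # retain WAL until this standby has received it

    [reporting]
    synchronous = on       # commits on the master wait for this standby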
On Wed, Sep 22, 2010 at 5:43 PM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
> So let's put synchronous replication aside for now, and focus on standby
> registration first. Once we have that, the synchronous replication patch
> will be much smaller and easier to review.

Though I agree with standby registration, I'm still unclear on what exactly standby registration means ;)

What if the number of standby entries in standby.conf is more than max_wal_senders? This situation is allowed if we treat standby.conf as just an access control list like pg_hba.conf. But if we have to ensure that all the registered standbys can connect to the master, we should emit an error in that case.

Should we allow standby.conf to be changed and reloaded while the server is running? This seems to be required if we use standby.conf as a replacement for wal_keep_segments, because we need to register the backup start location as the last receive location of the upcoming standby when taking a base backup for it. But what if the reloaded standby.conf has no entry for an already-connected standby? If we treat standby.conf as just an access control list, we can easily allow it to be reloaded, just as pg_hba.conf is. Otherwise, we would need a careful design.

Should we allow multiple standbys with the same name to connect to the master? That is, should an entry in standby.conf and a real standby have a one-to-one relationship? Or should we add a new parameter specifying the number of standbys with that name?

> Any volunteers on implementing that? Fujii-san?

I'm willing to implement that. But I'll be busy for a few days because of a presentation at LinuxCon and so on, so please feel free to try it if time allows.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
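For reference, the two master-side settings in question here are ordinary postgresql.conf GUCs; the values below are only illustrative:

    # postgresql.conf on the master (illustrative values)
    max_wal_senders = 5        # maximum number of concurrent walsender processes
    wal_keep_segments = 32     # WAL segments currently kept around for standbys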
On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao <masao.fujii@gmail.com> wrote:
> What if the number of standby entries in standby.conf is more than
> max_wal_senders? This situation is allowed if we treat standby.conf
> as just an access control list like pg_hba.conf. But if we have to ensure
> that all the registered standbys can connect to the master, we should
> emit an error in that case.

I don't think a cross-check between these settings makes much sense. We should either get rid of max_wal_senders and make it always equal to the number of defined standbys, or we should treat them as independent settings.

> Should we allow standby.conf to be changed and reloaded while the
> server is running?

Yes.

> But what if the
> reloaded standby.conf has no entry for an already-connected standby?

We kick him out?

> Should we allow multiple standbys with the same name to connect to
> the master?

No. The point of naming them is to uniquely identify them.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On 22/09/10 16:54, Robert Haas wrote:
> On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao <masao.fujii@gmail.com> wrote:
>> What if the number of standby entries in standby.conf is more than
>> max_wal_senders? This situation is allowed if we treat standby.conf
>> as just an access control list like pg_hba.conf. But if we have to ensure
>> that all the registered standbys can connect to the master, we should
>> emit an error in that case.
>
> I don't think a cross-check between these settings makes much sense.
> We should either get rid of max_wal_senders and make it always equal
> to the number of defined standbys, or we should treat them as
> independent settings.

Even with registration, we will continue to support anonymous asynchronous standbys that just connect and start streaming. We need some headroom for those.

>> But what if the
>> reloaded standby.conf has no entry for an already-connected standby?
>
> We kick him out?

Sounds reasonable.

>> Should we allow multiple standbys with the same name to connect to
>> the master?
>
> No. The point of naming them is to uniquely identify them.

Hmm, that situation can arise if there's a network glitch which leads the standby to disconnect, but the master still considers the connection as alive. When the standby reconnects, the master will see two simultaneous connections from the same standby. In that scenario, you clearly want to disconnect the old connection in favor of the new one. Is there a scenario where you'd want to keep the old connection instead and refuse the new one?

Perhaps that should be made configurable, so that you wouldn't need to list all standbys in the config file if you don't want to. Then you don't get any of the benefits of standby registration, though.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
>> No. The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one.

+1 for making that the behavior.

> Is there a scenario
> where you'd want to keep the old connection instead and refuse the new one?

I doubt it.

> Perhaps that should be made configurable, so that you wouldn't need to list
> all standbys in the config file if you don't want to. Then you don't get any
> of the benefits of standby registration, though.

I think it's fine to have async slaves that don't want any special features (like sync rep, or tracking how far behind they are in the xlog stream) not mentioned in the config file. But allowing multiple slaves with the same name seems like complexity without any attendant benefit.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
>>> Should we allow multiple standbys with the same name to connect to
>>> the master?
>>
>> No. The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one. Is there a scenario
> where you'd want to keep the old connection instead and refuse the new one?

$Bob restores a backup image of the slave to test some new stuff in a dev environment, and it automatically connects. Thanks to IPv4 and the NAT that is often necessary, they both *appear* to the real master as the same IP address, even though, in the remote campus, they are on two separate "networks", all NATed through the one IP address...

Now, that's not (likely) to happen in a "sync rep" situation, but for an async setup, with standby registration automatically being able to keep WAL, etc., satellite offices with occasional network hiccups (and the above-mentioned developer VMs) make registration (and centralized monitoring of the slaves) very interesting...

a.
Heikki Linnakangas wrote:
> (starting yet another thread to stay focused)
>
> Having mulled through all the recent discussions on synchronous
> replication, ISTM there is pretty wide consensus that having a registry
> of all standbys in the master is a good idea. Even those who don't think
> it's *necessary* for synchronous replication seem to agree that it's
> nevertheless a pretty intuitive way to configure it. And it has some
> benefits even if we never get synchronous replication.
>
> So let's put synchronous replication aside for now, and focus on standby
> registration first. Once we have that, the synchronous replication patch
> will be much smaller and easier to review.
>
> The consensus seems to be to use a configuration file called standby.conf.
> Let's use the "ini file format" for now [1].
>
> Aside from synchronous replication, there are three nice things we can
> do with a standby registry:
>
> A) Make monitoring easier. Let's have a system view to show the status
> of all standbys [2].

It would be interesting if we could fire triggers on changes to that status view. I can see that solving many user management needs.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one. Is there a scenario
> where you'd want to keep the old connection instead and refuse the new
> one?

Protection against spoofing? If connecting with the right IP is all it takes…

Regards,
--
dim
On 23/09/10 12:32, Dimitri Fontaine wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>> Hmm, that situation can arise if there's a network glitch which leads the
>> standby to disconnect, but the master still considers the connection as
>> alive. When the standby reconnects, the master will see two simultaneous
>> connections from the same standby. In that scenario, you clearly want to
>> disconnect the old connection in favor of the new one. Is there a scenario
>> where you'd want to keep the old connection instead and refuse the new
>> one?
>
> Protection against spoofing? If connecting with the right IP is all it takes…

You also need to authenticate with a valid username and password, of course. As the patch stands, that needs to be a superuser, but we should aim for smarter authorization than that.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
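For illustration, the pg_hba.conf lock-down mentioned earlier in the thread uses the special "replication" keyword in the database column, which matches replication connections only; if this is the only entry for that role and host, the role is effectively limited to replication (the address and role name below are made up):

    # pg_hba.conf on the master -- illustrative address and role name
    # this entry matches replication connections only
    host    replication    rep_user    192.168.0.10/32    md5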
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> Having mulled through all the recent discussions on synchronous replication,
> ISTM there is pretty wide consensus that having a registry of all standbys
> in the master is a good idea. Even those who don't think it's *necessary*
> for synchronous replication seem to agree that it's nevertheless a pretty
> intuitive way to configure it. And it has some benefits even if we never get
> synchronous replication.

Yeah, it's nice to have, but I disagree that it's a nice way to configure it. I still think that in the long run it's more hassle to maintain than a distributed setup.

> The consensus seems to be to use a configuration file called
> standby.conf. Let's use the "ini file format" for now [1].

What about automatic registration of standbys? That's not going to fly with the unique global configuration file idea, but that's good news.

Automatic registration is a good answer to both your points A) monitoring and C) wal_keep_segments, but needs some more thinking wrt security and authentication.

What about having a new GRANT privilege for replication, so that any standby can connect with a non-superuser role as soon as the master's setup GRANTs replication to the role? You still need the HBA setup to be accepting the slave, too, of course.

Regards,
--
dim
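Purely as a sketch of what such a privilege could look like -- the syntax below is hypothetical and not part of any patch -- the idea is to let a dedicated, non-superuser role stream WAL:

    -- hypothetical syntax sketching the proposal; not implemented
    CREATE ROLE standby1 LOGIN PASSWORD 'secret';
    GRANT REPLICATION TO standby1;    -- instead of requiring superuser rights

A role attribute (something along the lines of CREATE ROLE standby1 LOGIN REPLICATION) would be another possible shape for the same idea.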
On 23/09/10 12:49, Dimitri Fontaine wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>> The consensus seems to be to use a configuration file called
>> standby.conf. Let's use the "ini file format" for now [1].
>
> What about automatic registration of standbys? That's not going to fly
> with the unique global configuration file idea, but that's good news.
>
> Automatic registration is a good answer to both your points A)
> monitoring and C) wal_keep_segments, but needs some more thinking wrt
> security and authentication.
>
> What about having a new GRANT privilege for replication, so that any
> standby can connect with a non-superuser role as soon as the master's
> setup GRANTs replication to the role? You still need the HBA setup to be
> accepting the slave, too, of course.

There are two separate concepts here:

1. Automatic registration. When a standby connects, its information gets permanently added to the standby.conf file.

2. Unregistered standbys. A standby connects, and its information is not in standby.conf. It's let in anyway, and standby.conf is unchanged.

We'll need to support unregistered standbys, at least in asynchronous mode. It's also possible for synchronous standbys, but you can't have the "if the standby is disconnected, don't finish any commits until it reconnects and catches up" behavior without registration.

I'm inclined to not do automatic registration, not for now at least. Registering a synchronous standby should not be taken lightly. If a standby gets accidentally added to standby.conf, the master will have to keep more WAL around and might delay all commits, depending on the options used.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> There are two separate concepts here:
>
> 1. Automatic registration. When a standby connects, its information gets
> permanently added to the standby.conf file.
>
> 2. Unregistered standbys. A standby connects, and its information is not in
> standby.conf. It's let in anyway, and standby.conf is unchanged.
>
> We'll need to support unregistered standbys, at least in asynchronous
> mode. It's also possible for synchronous standbys, but you can't have the
> "if the standby is disconnected, don't finish any commits until it
> reconnects and catches up" behavior without registration.

I don't see why we need to support unregistered standbys if we have automatic registration. I've been thinking about that on and off, which is why I took some time to answer, but I still fail to see the reason why you're saying that.

What I think we need is an easy way to manually unregister a standby on the master; that would be part of the maintenance routine for disconnecting a standby. It seems like an admin function would do, and it so happens that this is how it works with PGQ / londiste.

> I'm inclined to not do automatic registration, not for now at
> least. Registering a synchronous standby should not be taken lightly. If a
> standby gets accidentally added to standby.conf, the master will have to
> keep more WAL around and might delay all commits, depending on the options
> used.

For this reason I think we need an easy-to-use facility for checking the system health, one that includes showing how many WAL files are currently kept and which standbys are registered as still needing them. If you happen to have forgotten to unregister your standby, it's time to call that admin function from above.

Regards,
--
dim
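To make that concrete, the interface could be as simple as the sketch below; the function name, view name, and columns are all made up for illustration:

    -- hypothetical maintenance and monitoring interface (names invented)
    SELECT pg_unregister_standby('standby1');      -- retire a standby on the master
    SELECT standby_name, wal_segments_kept
      FROM pg_standby_status;                      -- how much WAL each standby still needs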
On Thu, Sep 23, 2010 at 6:49 PM, Dimitri Fontaine <dfontaine@hi-media.com> wrote:
> Automatic registration is a good answer to both your points A)
> monitoring and C) wal_keep_segments, but needs some more thinking wrt
> security and authentication.

Aside from standby registration itself, I have another thought for C). Keeping many WAL files in pg_xlog of the master is not a good design in the first place. I doubt that pg_xlog on most systems has enough capacity to store many WAL files for the standby.

Usually the place where many WAL files can be stored is the archive. So I've been thinking of making walsender send archived WAL files to the standby. That is, when a WAL file required by the standby is not found in pg_xlog, walsender restores it from the archive by executing the restore_command that the user specified. Then walsender reads the WAL file and sends it.

Currently, if pg_xlog is not large enough on your system, you have to struggle with setting up a warm-standby environment on top of streaming replication, to prevent the WAL files still required by the standby from being deleted before they are shipped. Many people would be disappointed by that.

The archived-log-shipping approach removes the need for the warm-standby setup and wal_keep_segments, which would make streaming replication easier to use. Thoughts?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
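In configuration terms, the proposal amounts to giving the master its own restore_command for walsender to fall back on. This is a hypothetical sketch -- today restore_command exists only on the standby -- with an illustrative archive path:

    # hypothetical master-side setting for walsender to fetch old segments
    restore_command = 'cp /mnt/server/archive/%f %p'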
On 29.09.2010 11:46, Fujii Masao wrote:
> Aside from standby registration itself, I have another thought for C). Keeping
> many WAL files in pg_xlog of the master is not a good design in the first
> place. I doubt that pg_xlog on most systems has enough capacity to store many
> WAL files for the standby.
>
> Usually the place where many WAL files can be stored is the archive. So I've
> been thinking of making walsender send archived WAL files to the standby.
> That is, when a WAL file required by the standby is not found in pg_xlog,
> walsender restores it from the archive by executing the restore_command that
> the user specified. Then walsender reads the WAL file and sends it.
>
> Currently, if pg_xlog is not large enough on your system, you have to struggle
> with setting up a warm-standby environment on top of streaming replication, to
> prevent the WAL files still required by the standby from being deleted before
> they are shipped. Many people would be disappointed by that.
>
> The archived-log-shipping approach removes the need for the warm-standby setup
> and wal_keep_segments, which would make streaming replication easier to use.
> Thoughts?

The standby can already use restore_command to fetch WAL files from the archive. I don't see why the master should be involved in that.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
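For reference, this is the existing standby-side setup Heikki is referring to, where recovery.conf combines streaming with restoring from the archive (the host name and archive path below are only examples):

    # recovery.conf on the standby -- existing functionality, illustrative values
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=rep_user'
    restore_command = 'cp /mnt/server/archive/%f %p'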
On Thu, Sep 30, 2010 at 11:32 PM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
> The standby can already use restore_command to fetch WAL files from the
> archive. I don't see why the master should be involved in that.

To make the standby use restore_command for that, you have to specify something like scp in archive_command, or set up a shared directory (e.g., using an NFS server). But I don't want to use either, because they make the installation complicated (e.g., I don't want to register a passwordless ssh key just to scp the WAL files from the master to the standby, and I don't want to purchase an extra server for a shared directory and set up an NFS server). I believe that's the same reason why you implemented the streaming backup tool.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
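For illustration, this is the kind of master-side archiving setup Fujii wants to avoid, since pushing WAL with scp requires a passwordless ssh key between the servers (the host and path are made up):

    # postgresql.conf on the master -- the setup being avoided; illustrative host/path
    archive_mode = on
    archive_command = 'scp %p standby.example.com:/var/lib/pgsql/archive/%f'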