Re: Synchronizing slots from primary to standby - Mailing list pgsql-hackers
| From | Masahiko Sawada |
|---|---|
| Subject | Re: Synchronizing slots from primary to standby |
| Date | |
| Msg-id | CAD21AoDj24e8TAQucVT7ZuZn81snddimACMf=uD0ugh4p88GVw@mail.gmail.com |
| In response to | Re: Synchronizing slots from primary to standby (Amit Kapila <amit.kapila16@gmail.com>) |
| Responses | Re: Synchronizing slots from primary to standby |
| List | pgsql-hackers |
On Thu, Feb 1, 2024 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Jan 31, 2024 at 9:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Wed, Jan 31, 2024 at 7:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > Considering my previous point, where we don't want to restart for a required
> > > parameter change, isn't it better to avoid repeated restarts (say when
> > > the user gave an invalid dbname)? BTW, I think this restart interval
> > > was added based on your previous complaint [1].
> >
> > I think it's useful that the slotsync worker restarts immediately when
> > a required parameter is changed but waits to restart when it exits
> > with an error. IIUC the apply worker does so; if it restarts due to a
> > subscription parameter change, it resets the last-start time so that
> > the launcher will restart it without waiting.
> >
>
> Agreed, this idea sounds good to me.
>
> > > > ---
> > > > When I dropped a database on the primary that has a failover slot, I
> > > > got the following logs on the standby:
> > > >
> > > > 2024-01-31 17:25:21.750 JST [1103933] FATAL: replication slot "s" is
> > > > active for PID 1103935
> > > > 2024-01-31 17:25:21.750 JST [1103933] CONTEXT: WAL redo at 0/3020D20
> > > > for Database/DROP: dir 1663/16384
> > > > 2024-01-31 17:25:21.751 JST [1103930] LOG: startup process (PID
> > > > 1103933) exited with exit code 1
> > > >
> > > > It seems that because the slotsync worker created the slot on the
> > > > standby, the slot's active_pid is still valid.
> > > >
> > > But we release the slot after sync. And we do take a shared lock on
> > > the database to make the startup process wait for slotsync. There is
> > > one gap, which is that we don't reset active_pid for temp slots in
> > > ReplicationSlotRelease(), so for temp slots such an error can occur,
> > > but OTOH, we immediately make the slot persistent after sync. As per
> > > my understanding, it is only possible to get this error if the initial
> > > sync doesn't happen and the slot remains temporary. Is that your case?
> > > How did you reproduce this?
> >
> > I created a failover slot manually on the primary and dropped the
> > database where the failover slot is created. So this would not happen
> > in normal cases.
> >
>
> Right, it won't happen in normal cases (say for walsender). This can
> happen in some cases even without this patch, as noted in the comments just
> above the active_pid check in ReplicationSlotsDropDBSlots(). Now, we need
> to think whether we should just update the comments above the active_pid
> check to explain this case or try to engineer some solution for this
> not-so-common case. I guess if we want a solution we need to stop the
> slotsync worker temporarily till the drop database WAL is applied, or
> something like that.
>
> > BTW I've tested the following switch/fail-back scenario but it seems
> > not to work fine. Am I missing something?
> >
> > Setup:
> > node1 is the primary, node2 is the physical standby for node1, and
> > node3 is the subscriber connecting to node1.
> >
> > Steps:
> > 1. [node1]: create a table and a publication for the table.
> > 2. [node2]: set enable_syncslot = on and start (to receive WALs from node1).
> > 3. [node3]: create a subscription with failover = true for the publication.
> > 4. [node2]: promote to become the new primary.
> > 5. [node3]: alter the subscription to connect to the new primary, node2.
> > 6. [node1]: stop, set enable_syncslot = on (and other required
> > parameters), then start as a new standby.
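For reference, the quoted scenario boils down to roughly the following commands. This is only a sketch: it assumes the enable_syncslot GUC name used by the patch version under discussion, plus made-up object names and connection strings (test_tbl, test_pub, test_sub, host=node1/node2), so the exact names and options may differ from the final committed feature.

```sql
-- Step 1, on node1 (primary): a table and a publication for it.
CREATE TABLE test_tbl (id int PRIMARY KEY);
CREATE PUBLICATION test_pub FOR TABLE test_tbl;

-- Step 2, on node2 (standby): enable slot synchronization and restart.
-- (GUC name as used in the patch under discussion.)
ALTER SYSTEM SET enable_syncslot = on;

-- Step 3, on node3 (subscriber): a failover-enabled subscription to node1.
CREATE SUBSCRIPTION test_sub
  CONNECTION 'host=node1 dbname=postgres'
  PUBLICATION test_pub
  WITH (failover = true);

-- Step 4: promote node2 so it becomes the new primary.
--   pg_ctl promote -D <node2_data_dir>

-- Step 5, on node3: point the subscription at the new primary, node2.
ALTER SUBSCRIPTION test_sub CONNECTION 'host=node2 dbname=postgres';

-- Step 6: stop node1, configure it as a standby of node2 with
-- enable_syncslot = on (plus the other required settings), then start it.
-- The slotsync worker then reports the "already exists" error quoted below.
```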
> >
> > Then I got the error "exiting from slot synchronization because same
> > name slot "test_sub" already exists on the standby".
> >
> > The logical replication slot that was created on the old primary
> > (node1) has been synchronized to the old standby (node2). Therefore on
> > node2, the slot's "synced" field is true. However, once node1 starts
> > as the new standby with slot synchronization, the slotsync worker
> > cannot synchronize the slot because the slot's "synced" field on the
> > old primary is false.
> >
>
> Yeah, we avoided doing anything in this case because the user could
> have manually created another slot with the same name on the standby.
> Unlike WAL, slots can be modified on the standby as we allow decoding on
> the standby, so we can't allow overwriting the existing slots. We won't
> be able to distinguish whether the existing slot was a slot that the
> user wants to sync with the primary or a slot created on the standby to
> perform decoding. I think in this case the user first needs to drop the
> slot on the new standby.

Yes, but if we do a switch-back further (i.e. in the above case, node1
goes back to being the primary again and node2 becomes the standby again),
the user doesn't need to remove failover slots since they are already
marked as "synced". I wonder if we could do something automatically to
reduce the user's operations.

Also, if we support the slot synchronization feature on a cascading
standby in the future, this operation will have to be changed.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
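As a reference for the workaround mentioned above (dropping the same-named slot on the new standby before slot synchronization can take over), here is a sketch that inspects the "synced" state being discussed. It assumes the failover and synced columns this patch series adds to pg_replication_slots, and reuses the hypothetical slot name test_sub from the quoted scenario; names may differ from the final committed feature.

```sql
-- On node2 (the new primary): the slot copied earlier by the slotsync
-- worker is marked as synced.
SELECT slot_name, failover, synced
FROM pg_replication_slots
WHERE slot_name = 'test_sub';

-- On node1 (the old primary, now the new standby): the pre-existing slot
-- has synced = false, so the slotsync worker refuses to overwrite it.
-- Dropping it by hand lets synchronization recreate it from the primary.
SELECT pg_drop_replication_slot('test_sub');
```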