Re: persist logical slots to disk during shutdown checkpoint - Mailing list pgsql-hackers

From: Dilip Kumar
Subject: Re: persist logical slots to disk during shutdown checkpoint
Msg-id: CAFiTN-s0YZa-S1w_=vtzt3RXErmCM858XuK-U=NQgv+39GScsg@mail.gmail.com
In response to: Re: persist logical slots to disk during shutdown checkpoint (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: persist logical slots to disk during shutdown checkpoint
List: pgsql-hackers
On Wed, Sep 6, 2023 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Sep 6, 2023 at 9:57 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Wed, Sep 6, 2023 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > On Tue, Sep 5, 2023 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> > > >
> > > Right, it can change and in that case, the check related to
> > > confirm_flush LSN will fail during the upgrade. However, the point is
> > > that if we don't take spinlock, we need to properly write comments on
> > > why unlike in other places it is safe here to check these values
> > > without spinlock.
> >
> > I agree with that, but even now it is not true that we are always
> > reading this under the spinlock; for example [1][2], we can see we are
> > reading this without the spinlock.
> >
> > [1]
> > StartLogicalReplication
> > {
> >     /*
> >      * Report the location after which we'll send out further commits as the
> >      * current sentPtr.
> >      */
> >     sentPtr = MyReplicationSlot->data.confirmed_flush;
> > }
> >
> > [2]
> > LogicalIncreaseRestartDecodingForSlot
> > {
> >     /* candidates are already valid with the current flush position, apply */
> >     if (updated_lsn)
> >         LogicalConfirmReceivedLocation(slot->data.confirmed_flush);
> > }
>
> These are accessed only in walsender and confirmed_flush is always
> updated by walsender. So, this is clearly okay.

Hmm, that's a valid point.

> > > We can do that but I feel we have to be careful for
> > > all future usages of these variables, so, having spinlock makes them
> > > follow the normal coding pattern which I feel makes it more robust.
> > > Yes, marking dirty via common function also has merits but personally,
> > > I find it better to follow the normal coding practice of checking the
> > > required fields under spinlock. The other possibility is to first
> > > check if we need to mark the slot dirty under spinlock, then release
> > > the spinlock, and then call the common MarkDirty function, but again
> > > that will be under the assumption that these flags won't change.
> >
> > That's true, but we are already making that assumption, because even
> > now we are taking the spinlock and making the decision to mark the
> > slot dirty. After that we release the spinlock, and if we do not have
> > a guarantee that the fields cannot change concurrently, then many
> > things can go wrong, no?
>
> Also, note that the invalidated field could be updated by the startup
> process, but that is only possible on standby, so it is safe -- but
> again that would be another assumption.

Okay, so I also agree to go with the current patch. Because, as you said
above, if we access this without a spinlock outside the walsender then we
will be making a new exception, and I agree with the decision of not making
a new exception. Other than that, the patch LGTM.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com