On Tuesday, December 13, 2022 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, Dec 13, 2022 at 4:36 AM Peter Smith <smithpb2250@gmail.com>
> wrote:
> >
> > ~~~
> >
> > 3. pa_set_stream_apply_worker
> >
> > +/*
> > + * Set the worker that required to apply the current streaming transaction.
> > + */
> > +void
> > +pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo) {
> > +stream_apply_worker = winfo; }
> >
> > Comment wording seems wrong.
> >
>
> I think something like "Cache the parallel apply worker information."
> may be more suitable here.
Changed.
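For reference, a minimal standalone sketch of the function with the agreed comment wording. ParallelApplyWorkerInfo is stubbed as an opaque type here purely for illustration; in the patch it is defined in applyparallelworker.c:

```c
/* Opaque stand-in for the real struct from applyparallelworker.c. */
typedef struct ParallelApplyWorkerInfo ParallelApplyWorkerInfo;

/* The parallel apply worker handling the current streaming transaction. */
ParallelApplyWorkerInfo *stream_apply_worker = 0;

/*
 * Cache the parallel apply worker information.
 */
void
pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo)
{
	stream_apply_worker = winfo;
}
```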
> Few more similar cosmetic comments:
> 1.
> + /*
> + * Unlock the shared object lock so that the parallel apply worker
> + * can continue to receive changes.
> + */
> + if (!first_segment)
> + pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);
>
> This comment is missing in the new (0002) patch.
Added.
> 2.
> + if (!winfo->serialize_changes)
> + {
> + if (!first_segment)
> + pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);
>
> I think we should write some comments on why we are not unlocking when
> serializing changes.
Added.
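To illustrate the comment being requested, here is a simplified, self-contained sketch of the unlock logic. The types, the pa_unlock_stream stub, the handle_stream_start wrapper, and the exact comment wording are all stand-ins for illustration, not the patch's actual code:

```c
#include <stdbool.h>

/* Simplified stand-ins for the real applyparallelworker.c definitions. */
typedef unsigned int TransactionId;
enum { AccessExclusiveLock = 8 };

typedef struct
{
	TransactionId xid;
} ParallelApplyWorkerShared;

typedef struct
{
	ParallelApplyWorkerShared *shared;
	bool		serialize_changes;
} ParallelApplyWorkerInfo;

bool		stream_lock_released = false;

/* Stub: the real function releases a session-level stream lock. */
void
pa_unlock_stream(TransactionId xid, int lockmode)
{
	(void) xid;
	(void) lockmode;
	stream_lock_released = true;
}

/* Hypothetical wrapper showing where the suggested comment would go. */
void
handle_stream_start(ParallelApplyWorkerInfo *winfo, bool first_segment)
{
	if (!winfo->serialize_changes)
	{
		/*
		 * Unlock the shared object lock so that the parallel apply worker
		 * can continue to receive changes.
		 *
		 * When serializing changes, the worker will instead apply them
		 * from the spool file later, so there is no need to release the
		 * lock here.
		 */
		if (!first_segment)
			pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);
	}
}
```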
> 3. Please add a comment like below in the patch to make it clear why in
> stream_abort case we perform locking before sending the message.
> --- a/src/backend/replication/logical/worker.c
> +++ b/src/backend/replication/logical/worker.c
> @@ -1858,6 +1858,13 @@ apply_handle_stream_abort(StringInfo s)
> * worker will wait on the lock for the next set of
> changes after
> * processing the STREAM_ABORT message if it is not
> already waiting
> * for STREAM_STOP message.
> + *
> + * It is important to perform this locking
> before sending the
> + * STREAM_ABORT message so that the leader can
> hold the lock first
> + * and the parallel apply worker will wait for
> the leader to release
> + * the lock. This is the same as what we do in
> + * apply_handle_stream_stop. See Locking
> Considerations atop
> + * applyparallelworker.c.
> */
> if (!toplevel_xact)
Merged.
Attached is the new version of the patch, which addresses the above comments.
I also slightly refactored the logic related to pa_spooled_messages() so that
it doesn't need to wait for a timeout if there are pending spooled messages.
Best regards,
Hou zj