Thread: logicalrep_worker_launch -- counting/checking the worker limits

logicalrep_worker_launch -- counting/checking the worker limits

From
Peter Smith
Date:
While reviewing other threads I have been looking more closely at the
logicalrep_worker_launch() function. IMO the logic of that function
seems not quite right.

Here are a few things I felt are strange:

1. The function knows exactly what type of worker it is launching, but
still, it is calling the worker counting functions
logicalrep_sync_worker_count() and  logicalrep_pa_worker_count() even
when launching a worker of a *different* type.

1a. I think we should only count/check the tablesync worker limit when
trying to launch a tablesync worker.

1b. I think we should only count/check the parallel apply worker limit
when trying to launch a parallel apply worker (see the sketch below).
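
To illustrate 1a/1b, here is a minimal sketch of the idea (not the
actual patch). It assumes hypothetical booleans is_tablesync_worker and
is_parallel_apply_worker derived from the caller's arguments; in the
real function the worker type is inferred from things like relid and
the DSM handle, and the surrounding locking/launch code is elided:

/*
 * Sketch only (not the committed code).  is_tablesync_worker and
 * is_parallel_apply_worker are assumed booleans derived from the
 * caller's arguments.
 */
int     nsyncworkers = 0;
int     nparallelapplyworkers = 0;

/* Count only the worker type whose limit we are about to check. */
if (is_tablesync_worker)
    nsyncworkers = logicalrep_sync_worker_count(subid);
else if (is_parallel_apply_worker)
    nparallelapplyworkers = logicalrep_pa_worker_count(subid);

/* And later, enforce only the limit relevant to this launch. */
if (is_tablesync_worker &&
    nsyncworkers >= max_sync_workers_per_subscription)
    return false;   /* tablesync limit reached (after GC/retry, see item 2) */

if (is_parallel_apply_worker &&
    nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)
    return false;   /* parallel apply limit reached */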

~

2. There is some condition for attempting the garbage-collection of workers:

/*
* If we didn't find a free slot, try to do garbage collection.  The
* reason we do this is because if some worker failed to start up and its
* parent has crashed while waiting, the in_use state was never cleared.
*/
if (worker == NULL || nsyncworkers >= max_sync_workers_per_subscription)

The inclusion of that nsyncworkers check here has very subtle
importance. AFAICT it means that even if we *did* find a free worker,
we still need to do garbage collection just in case one of those
'in_use' tablesync workers was in error (e.g. it crashed after being
marked in_use). By garbage-collecting (and then re-counting
nsyncworkers) we might be able to launch the tablesync successfully
instead of just returning as if the limit had already been reached.

2a. IIUC that is all fine. The problem is that I think exactly the
same logic should also apply to the parallel apply workers here.

2b. The comment above should explain more about the reason for the
nsyncworkers condition -- the existing comment doesn't really cover it.
(A sketch covering 2a and 2b follows below.)
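
For 2a and 2b together, here is a hedged sketch of how the trigger and
its comment might look. The wording is illustrative only, the cleanup
loop and the later retry are elided, and nparallelapplyworkers is
assumed to hold the result of logicalrep_pa_worker_count():

/*
 * If we didn't find a free slot, try to do garbage collection.  The
 * reason we do this is because if some worker failed to start up and
 * its parent has crashed while waiting, the in_use state was never
 * cleared.  We also do this when a per-subscription worker limit
 * appears to be reached, since some of the counted 'in_use' workers
 * may actually be defunct; cleaning them up and re-counting may let
 * this launch succeed after all.
 */
if (worker == NULL ||
    nsyncworkers >= max_sync_workers_per_subscription ||
    nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)
{
    /* ... existing loop calling logicalrep_worker_cleanup() ... */
}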

~

3. There is an incorrect cut/paste comment in the body of
logicalrep_sync_worker_count().

That comment should be changed to read similarly to the equivalent
comment in logicalrep_pa_worker_count() (an illustration follows below).
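
For example (illustrative wording only, not proposed text verbatim),
the counting loop could say explicitly that it is counting tablesync
workers, mirroring the phrasing in logicalrep_pa_worker_count(). The
relid test shown is how a tablesync worker is identified here at the
time of writing; details may differ:

/*
 * Scan all attached workers, counting only the tablesync workers that
 * belong to the given subscription.
 */
for (i = 0; i < max_logical_replication_workers; i++)
{
    LogicalRepWorker *w = &LogicalRepCtx->workers[i];

    if (w->subid == subid && OidIsValid(w->relid))
        res++;
}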

------

PSA a patch to address all these items.

This patch is about making the function logically consistent. Removing
some of the redundant counting should also be slightly more efficient
in theory, but in practice the counting loops are bounded by
max_logical_replication_workers, which is too small for any performance
improvement to be noticeable.

Thoughts?

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachment

Re: logicalrep_worker_launch -- counting/checking the worker limits

From
Peter Smith
Date:
The previous patch was accidentally not resetting the boolean limit
flags to false for retries.

Fixed in v2.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachment

Re: logicalrep_worker_launch -- counting/checking the worker limits

From
Amit Kapila
Date:
On Fri, Aug 11, 2023 at 2:29 PM Peter Smith <smithpb2250@gmail.com> wrote:
>
> While reviewing other threads I have been looking more closely at the
> logicalrep_worker_launch() function. IMO the logic of that function
> seems not quite right.
>
> Here are a few things I felt are strange:
>
...
>
> 2. There is some condition for attempting the garbage-collection of workers:
>
> /*
> * If we didn't find a free slot, try to do garbage collection.  The
> * reason we do this is because if some worker failed to start up and its
> * parent has crashed while waiting, the in_use state was never cleared.
> */
> if (worker == NULL || nsyncworkers >= max_sync_workers_per_subscription)
>
> The inclusion of that nsyncworkers check here has very subtle
> importance. AFAICT it means that even if we *did* find a free worker,
> we still need to do garbage collection just in case one of those
> 'in_use' tablesync workers was in error (e.g. it crashed after being
> marked in_use). By garbage-collecting (and then re-counting
> nsyncworkers) we might be able to launch the tablesync successfully
> instead of just returning as if the limit had already been reached.
>
> 2a. IIUC that is all fine. The problem is that I think exactly the
> same logic should also apply to the parallel apply workers here.
>

Did you try to reproduce this condition? If not, can you please try it
once? I wonder: if the leader worker crashed, wouldn't that lead to a
restart of the server?

--
With Regards,
Amit Kapila.



Re: logicalrep_worker_launch -- counting/checking the worker limits

From
Peter Smith
Date:
A rebase was needed due to a recent push [1].

PSA v3.

------
[1] https://github.com/postgres/postgres/commit/2a8b40e3681921943a2989fd4ec6cdbf8766566c

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachment

Re: logicalrep_worker_launch -- counting/checking the worker limits

From
vignesh C
Date:
On Tue, 15 Aug 2023 at 08:09, Peter Smith <smithpb2250@gmail.com> wrote:
>
> A rebase was needed due to a recent push [1].

I have changed the status of the patch to "Waiting on Author" as
Amit's queries at [1] have not been verified and concluded. Please
feel free to address them and change the status back again.

[1] - https://www.postgresql.org/message-id/CAA4eK1LtFyiMV6e9%2BRr66pe5e-MX7Pk6N3iHd4JgcBW1X4kS6Q%40mail.gmail.com

Regards,
Vignesh



Re: logicalrep_worker_launch -- counting/checking the worker limits

From
"Andrey M. Borodin"
Date:

> On 15 Aug 2023, at 07:38, Peter Smith <smithpb2250@gmail.com> wrote:
>
> A rebase was needed due to a recent push [1].
>
> PSA v3.


> On 14 Jan 2024, at 10:43, vignesh C <vignesh21@gmail.com> wrote:
>
> I have changed the status of the patch to "Waiting on Author" as
> Amit's queries at [1] have not been verified and concluded. Please
> feel free to address them and change the status back again.

Hi Peter!

Are you still interested in this thread? If so, please post an answer to Amit's question.
If you are not interested, please withdraw the CF entry [0].

Thanks!


Best regards, Andrey Borodin.

[0] https://commitfest.postgresql.org/47/4499/


Re: logicalrep_worker_launch -- counting/checking the worker limits

From
Peter Smith
Date:
On Sun, Mar 31, 2024 at 8:12 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
>
>
> > On 15 Aug 2023, at 07:38, Peter Smith <smithpb2250@gmail.com> wrote:
> >
> > A rebase was needed due to a recent push [1].
> >
> > PSA v3.
>
>
> > On 14 Jan 2024, at 10:43, vignesh C <vignesh21@gmail.com> wrote:
> >
> > I have changed the status of the patch to "Waiting on Author" as
> > Amit's queries at [1] have not been verified and concluded. Please
> > feel free to address them and change the status back again.
>
> Hi Peter!
>
> Are you still interested in this thread? If so, please post an answer to Amit's question.
> If you are not interested, please withdraw the CF entry [0].
>
> Thanks!

Yeah, sorry for the long period of inactivity on this thread. Although
I still have some interest in it, I don't know when I'll get back to it
again, so in the meantime I've withdrawn this from the CF as requested.

Kind regards,
Peter Smith
Fujitsu Australia