Thread: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL
Hi,
I was reading src/backend/replication/logical/applyparallelworker.c .
In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.
Please see the patch.
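In rough terms, the change I have in mind is the following (just a sketch of the idea, not the literal hunk from the attached patch; the surrounding code in pa_allocate_worker may differ a bit from what is shown here):

    winfo = pa_launch_parallel_worker();
    if (!winfo)
    {
        /* No parallel apply worker is available for this transaction, so
         * release the hashtable that was created earlier in this function. */
        hash_destroy(ParallelApplyTxnHash);
        ParallelApplyTxnHash = NULL;
        return;
    }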
Thanks
On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:
Here is the patch :-)
On Tue, Jan 10, 2023 at 9:26 AM Ted Yu <yuzhihong@gmail.com> wrote:
In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call immediately follows `pa_lock_stream`.
I assume the following is the intended sequence of calls. If this is the case, I can add it to the patch.
Cheers
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 2e5914d5d9..9879b3fff2 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)
     if (fileset_state == FS_SERIALIZE_IN_PROGRESS)
     {
         pa_lock_stream(MyParallelShared->xid, AccessShareLock);
-        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);
 
         fileset_state = pa_get_fileset_state();
+        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);
     }
 
     /*
On Tue, Jan 10, 2023 at 9:43 AM Ted Yu <yuzhihong@gmail.com> wrote:
Looking closer at the comment above this code and other parts of the file, it seems the order is intentional.
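For the archives, my understanding (which may be imprecise) is that the leader holds the stream lock while it is serializing the remaining changes, so the immediate lock/unlock pair acts as a wait rather than protecting a critical section. Once the lock can be acquired the leader has finished, so it makes no difference whether the fileset state is re-read before or after the unlock. Schematically, with that reasoning spelled out as comments:

    if (fileset_state == FS_SERIALIZE_IN_PROGRESS)
    {
        /* Block until the leader, which holds the stream lock while it is
         * serializing, has finished; there is no need to keep holding it. */
        pa_lock_stream(MyParallelShared->xid, AccessShareLock);
        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);

        /* Serialization is done by now, so the re-read state will no longer
         * be FS_SERIALIZE_IN_PROGRESS. */
        fileset_state = pa_get_fileset_state();
    }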
Please disregard my email about `pa_process_spooled_messages_if_required`.
RE: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL
From: "houzj.fnst@fujitsu.com"
On Wednesday, January 11, 2023 1:25 AM Ted Yu <yuzhihong@gmail.com> wrote:
> I was reading src/backend/replication/logical/applyparallelworker.c .
> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.
Thanks for reporting.
ParallelApplyTxnHash is used to cache the state of streaming transactions being
applied. There could be multiple streaming transactions being applied in
parallel, and their information is already saved in ParallelApplyTxnHash, so
we should not release it just because we don't have a worker available to
handle a new transaction here.
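To illustrate (only a sketch, not the exact code or structure definitions from applyparallelworker.c, and the entry's member name below is assumed): each streaming transaction being applied in parallel gets its own entry keyed by its xid, so several entries can be live at the same time and must not be thrown away just because a launch for a new transaction failed.

    /* Entries for transactions that already have a worker stay in the table. */
    entry = hash_search(ParallelApplyTxnHash, &xid, HASH_ENTER, &found);
    if (!found)
        entry->winfo = winfo;    /* remember the worker assigned to this xid */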
Best Regards,
Hou zj
Hi,
/* First time through, initialize parallel apply worker state hashtable. */
if (!ParallelApplyTxnHash)
I think it would be better if `ParallelApplyTxnHash` is created by the first successful parallel apply worker.
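Concretely, the reordering I have in mind is roughly the following (a sketch only; the attached patch is the authoritative version, and the hash_create() arguments here are written from memory rather than copied from the file):

    /* Try to launch a parallel apply worker first; bail out if none is free. */
    winfo = pa_launch_parallel_worker();
    if (!winfo)
        return;

    /* First time through, initialize parallel apply worker state hashtable. */
    if (!ParallelApplyTxnHash)
    {
        HASHCTL     ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(TransactionId);
        ctl.entrysize = sizeof(ParallelApplyWorkerEntry);
        ctl.hcxt = ApplyContext;

        ParallelApplyTxnHash = hash_create("parallel apply workers hash",
                                           16, &ctl,
                                           HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
    }

This way a failed launch can never leave a hashtable behind, because the hashtable is only created once a worker has actually been started.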
Please take a look at the new patch and see if it makes sense.
Cheers
RE: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL
From: "houzj.fnst@fujitsu.com"
On Wednesday, January 11, 2023 10:21 AM Ted Yu <yuzhihong@gmail.com> wrote:
> /* First time through, initialize parallel apply worker state hashtable. */
> if (!ParallelApplyTxnHash)
>
> I think it would be better if `ParallelApplyTxnHash` is created by the first
> successful parallel apply worker.
Thanks for the suggestion. But I am not sure if it's worth changing the
order here, because it will only optimize the case where the user enables
parallel apply but never gets an available worker, which should be rare. And
in such a case, it'd be better to increase the number of workers or disable the parallel mode.
Best Regards,
Hou zj
I think even though the chance is rare, we shouldn't leak resources.
`ParallelApplyTxnHash` shouldn't be created if not even a single parallel apply worker can be launched.
Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL
From: Amit Kapila
On Wed, Jan 11, 2023 at 9:31 AM Ted Yu <yuzhihong@gmail.com> wrote:
>
> I think even though the chance is rare, we shouldn't leak resources.
>
But that is true iff we are never able to start the worker. Anyway, I
think it is probably fine either way but we can change it as per your
suggestion to make it more robust and probably for the sake of code
clarity. I'll push this tomorrow unless someone thinks otherwise.

--
With Regards,
Amit Kapila.
On Wed, Jan 11, 2023 at 6:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> But that is true iff we are never able to start the worker. Anyway, I
> think it is probably fine either way but we can change it as per your
> suggestion to make it more robust and probably for the sake of code
> clarity. I'll push this tomorrow unless someone thinks otherwise.
>
Thanks Amit for the confirmation.
Cheers
Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL
From: Amit Kapila
On Thu, Jan 12, 2023 at 8:25 AM Ted Yu <yuzhihong@gmail.com> wrote:
>
> Thanks Amit for the confirmation.
>
Pushed.

--
With Regards,
Amit Kapila.