> On Sep 15, 2020, at 3:41 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:
>
>
>
> On 2020/09/15 13:41, Bharath Rupireddy wrote:
>> On Tue, Sep 15, 2020 at 9:27 AM Li Japin <japinli@hotmail.com> wrote:
>>>
>>> For now, postgres use single process to send, receive and replay the WAL when we use stream replication,
>>> is there any point to parallelize this process? If it does, how do we start?
>>>
>>> Any thoughts?
>
> Probably this is a different kind of parallelism than what you're thinking of,
> but I was thinking of starting up the walwriter process in the standby server
> and making it fsync the streamed WAL data. That is, we would offload part of
> the walreceiver's tasks to walwriter: walreceiver performs WAL receive and
> write, and walwriter performs WAL flush, in parallel. I'm expecting that this
> change would improve replication performance, e.g., reduce the time to wait
> for sync replication.
>
> Without this change (i.e., currently), walreceiver alone performs WAL
> receive, write and flush. So while walreceiver is fsyncing WAL data,
> it cannot receive newly arrived WAL data. If the WAL flush takes a long
> time, the wait for sync replication on the primary is prolonged.
>
> Regards,
>
> --
> Fujii Masao
> Advanced Computing Technology Center
> Research and Development Headquarters
> NTT DATA CORPORATION
Yeah, that could be a promising direction.
I am now thinking about how to parallelize WAL replay. If we can improve the
efficiency of replay, we can shorten database recovery time (both for streaming
replication and for crash recovery).
For streaming replication, we may also need to improve the transmission of WAL
to speed up the entire recovery process.
I'm not sure if this is the right approach.
Regards,
Japin Li.