On Sun, Jul 13, 2025 at 6:06 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:
>
> On 13/07/2025 1:28 pm, Amit Kapila wrote:
> > On Tue, Jul 8, 2025 at 12:06 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:
> >> There is a well-known Postgres problem that a logical replication subscriber
> >> can not catch up with the publisher just because LR changes are applied by a
> >> single worker, while at the publisher changes are made by
> >> multiple concurrent backends.
> >>
> > BTW, do you know how users deal with this lag? For example, one can
> > imagine creating multiple pub-sub pairs for different sets of tables
> > so that the workload on the subscriber could also be shared by
> > multiple apply workers. I can also think of splitting the workload
> > among multiple pub-sub pairs by using row filters.
>
>
> Yes, I have seen users start several subscriptions/publications to
> receive and apply changes in parallel.
> But it cannot be considered a universal solution:
> 1. There are not always multiple tables (or partitions of one table)
> that would make it possible to split them between multiple publications.
> 2. It violates transactional behavior (consistency): if transactions
> update several tables included in different publications, then by applying
> these changes independently we can observe a state at the replica where one
> table is updated and another is not. The same is true for row filters.
> 3. Each walsender has to scan the WAL, so with N subscriptions we
> have to read and decode the WAL N times.
>
I agree that it is not a solution that can be applied in all cases,
nor do I mean that we shouldn't pursue the idea of prefetch or
parallel apply to improve the speed of apply. The question was just to
learn/discuss how users try to work around the lag in cases where the
lag is large.
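
For anyone following along, a minimal sketch of the workaround being
discussed (assuming PostgreSQL 15+ row filters; the table, column, and
connection details are made up for illustration) might look like this:

-- On the publisher: split one busy table across two publications
-- using row filters (the filter column must be covered by the
-- replica identity, e.g. the primary key, when UPDATE/DELETE are
-- published).
CREATE PUBLICATION pub_orders_even FOR TABLE orders WHERE (id % 2 = 0);
CREATE PUBLICATION pub_orders_odd  FOR TABLE orders WHERE (id % 2 = 1);

-- On the subscriber: two subscriptions give two independent apply workers.
CREATE SUBSCRIPTION sub_orders_even
    CONNECTION 'host=pub.example.com dbname=app'
    PUBLICATION pub_orders_even;
CREATE SUBSCRIPTION sub_orders_odd
    CONNECTION 'host=pub.example.com dbname=app'
    PUBLICATION pub_orders_odd;

As noted in points 2 and 3 above, the two apply workers commit
independently (so a transaction touching both halves is no longer
applied atomically on the replica), and each subscription requires its
own walsender decoding the WAL.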
--
With Regards,
Amit Kapila.