Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions - Mailing list pgsql-hackers

From: Masahiko Sawada
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CA+fd4k5KBRw5O45D32=zR_zgU-fMhD_0iudZwm-MYY1P2bUZ7Q@mail.gmail.com
In response to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Mon, 2 Dec 2019 at 17:32, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Sun, Dec 1, 2019 at 7:58 AM Michael Paquier <michael@paquier.xyz> wrote:
> >
> > On Fri, Nov 22, 2019 at 01:18:11PM +0530, Dilip Kumar wrote:
> > > I have rebased the patch on the latest head and also fixed the issue of
> > > "concurrent abort handling of the (sub)transaction", attached as
> > > (v1-0013-Extend-handling-of-concurrent-aborts-for-streamin) along with
> > > the complete patch set.  I have added the version number so that we
> > > can track the changes.
> >
> > The patch has rotted a bit and does not apply anymore.  Could you
> > please send a rebased version?  I have moved it to next CF, waiting on
> > author.
>
> I have rebased the patch set on the latest head.

Thank you for working on this.

This might have already been discussed, but I have a question about
the changes to the logical replication worker. In the current logical
replication there is a problem that the response time is doubled when
using synchronous replication, because WAL senders send changes only
after commit. It's especially bad when a transaction makes a lot of
changes. So I expected this feature to reduce the response time by
sending changes while the transaction is still in progress, but it
doesn't seem to do so. The logical replication worker writes the
changes to temporary files and applies them only when it receives the
commit record (STREAM COMMIT). Since the worker sends the LSN of the
commit record as the flush LSN to the publisher only after applying
all the changes, the publisher must wait until all changes have been
applied on the subscriber.  Another problem is that the worker doesn't
receive changes while it is applying the changes of another
transaction. These things make me think it would be better to have a
new worker dedicated to applying changes, like we have the WAL
receiver process and the startup process. Maybe we can have two
workers (a receiver and an applier) per subscription. Any thoughts?
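
To make what I mean concrete, below is a minimal, hypothetical sketch
(plain C with stand-in types and made-up function names such as
spool_change_to_file and replay_spooled_changes; this is not the
actual patch code) of the apply-worker flow I'm describing: streamed
changes are only spooled until STREAM COMMIT arrives, and feedback
with the flush LSN is sent only after the whole replay finishes.

/*
 * Hypothetical sketch (not the patch code) of the apply-worker flow:
 * streamed changes are spooled to a file and replayed only once
 * STREAM COMMIT arrives, after which the flush LSN is reported.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;        /* stand-in for PostgreSQL's LSN type */

static void
spool_change_to_file(FILE *spool, const char *change)
{
    fprintf(spool, "%s\n", change); /* buffer the change; nothing is applied yet */
}

static void
replay_spooled_changes(FILE *spool)
{
    char line[256];

    rewind(spool);
    while (fgets(line, sizeof(line), spool))
        printf("apply: %s", line);  /* the actual apply happens only here */
}

static void
send_feedback(XLogRecPtr flush_lsn)
{
    /* the publisher learns the flush LSN only after the whole replay */
    printf("feedback: flushed up to %lu\n", (unsigned long) flush_lsn);
}

int
main(void)
{
    FILE       *spool = tmpfile();
    XLogRecPtr  commit_lsn = 12345;

    /* STREAM START / STREAM CHANGE: received while the txn is in progress */
    spool_change_to_file(spool, "INSERT INTO t VALUES (1)");
    spool_change_to_file(spool, "INSERT INTO t VALUES (2)");

    /* STREAM COMMIT: replay everything, then acknowledge */
    replay_spooled_changes(spool);
    send_feedback(commit_lsn);

    fclose(spool);
    return 0;
}

With a dedicated applier worker, the receiving side could keep
consuming the stream while another process performs the replay,
instead of blocking on it as in the flow above.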

Regards,


--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


