Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions - Mailing list pgsql-hackers

From: Dilip Kumar
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CAFiTN-s2ipEQw3yDejFSt7HZ5he4AfrNZRVwTGF2pShJSWS5sQ@mail.gmail.com
In response to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
List: pgsql-hackers
On Thu, Aug 6, 2020 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Aug 5, 2020 at 7:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Wed, Aug 5, 2020 at 6:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> >
> > > Can we add a test for incomplete changes (probably with toast
> > > insertion but we can do it for spec_insert case as well) in
> > > ReorderBuffer such that it needs to first serialize the changes and
> > > then stream it?  I have manually verified such scenarios but it is
> > > good to have the test for the same.
> >
> > I have added a new test for the same in the stream.sql file.
> >
>
> Thanks, I have slightly changed the test so that we can consume DDL
> changes separately.  I have made a number of other adjustments, like
> changing a few more comments (to make them consistent with nearby
> comments), removing an unnecessary header file inclusion, and running
> pgindent.
> The next patch (v47-0001-Implement-streaming-mode-in-ReorderBuffer) in
> this series looks good to me.  I am planning to push it after one more
> read-through unless you or anyone else has any comments on the same.
> The patch I am talking about has the following functionality:
>
> Implement streaming mode in ReorderBuffer. Instead of serializing the
> transaction to disk after reaching the logical_decoding_work_mem limit
> in memory, we consume the changes we have in memory and invoke stream
> API methods added by commit 45fdc9738b. However, if we have an
> incomplete toast or speculative insert, we sometimes spill to disk
> because we can't stream until we have the complete tuple; as soon as
> we get the complete tuple, we stream the transaction including the
> serialized changes. Now that we can stream in-progress transactions,
> concurrent aborts may cause failures when the output plugin consults
> catalogs (both system and user-defined). We handle such failures by
> returning the ERRCODE_TRANSACTION_ROLLBACK sqlerrcode from system
> table scan APIs to the backend or WALSender decoding a specific
> uncommitted transaction. On receipt of such a sqlerrcode, the decoding
> logic aborts the decoding of the current transaction and continues
> with the decoding of other transactions. We also provide a new option
> via the SQL APIs to fetch the changes being streamed.
>
> This patch's functionality can be independently verified by SQL APIs
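
For reference, that verification can be done with test_decoding roughly
like this (a sketch, not the exact test; the slot name is an example and
the options mirror what contrib/test_decoding's stream tests use):

SET logical_decoding_work_mem = '64kB';

SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot',
                                                       'test_decoding');

-- ... run a transaction large enough to exceed logical_decoding_work_mem ...

SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
       'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');

SELECT pg_drop_replication_slot('regression_slot');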

Your changes look fine to me.
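
As an aside, the concurrent-abort handling described above boils down to
catching ERRCODE_TRANSACTION_ROLLBACK around the change-replay loop.  A
rough sketch of that pattern (not the committed code; process_changes()
is a hypothetical stand-in for replaying the changes through the output
plugin):

#include "postgres.h"

extern void process_changes(void);	/* hypothetical replay routine */

static void
stream_changes_guarded(void)
{
	MemoryContext oldcxt = CurrentMemoryContext;

	PG_TRY();
	{
		process_changes();	/* may ERROR out on a concurrent abort */
	}
	PG_CATCH();
	{
		ErrorData  *errdata;

		/* switch back so CopyErrorData() doesn't allocate in ErrorContext */
		MemoryContextSwitchTo(oldcxt);
		errdata = CopyErrorData();

		if (errdata->sqlerrcode == ERRCODE_TRANSACTION_ROLLBACK)
		{
			/* concurrent abort: discard the error and skip this xact */
			FlushErrorState();
			FreeErrorData(errdata);
		}
		else
		{
			/* any other error is propagated as usual */
			PG_RE_THROW();
		}
	}
	PG_END_TRY();
}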

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


