Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions - Mailing list pgsql-hackers

From Dilip Kumar
Subject Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date
Msg-id CAFiTN-sn-BaLJw9HmbRe+=6Ju7FoMEFftRE=8JzzdoowmX0fcw@mail.gmail.com
In response to Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
Responses Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Mon, Dec 30, 2019 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > I have observed some more issues
> >
> > 1. Currently, in ReorderBufferCommit, it is always expected that
> > whenever we get REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM, we must
> > have already got REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT, and in
> > SPEC_CONFIRM we send the tuple we got in SPEC_INSERT.  But now those
> > two messages can be in different streams, so we need to find a way to
> > handle this.  Maybe once we get SPEC_INSERT we can remember the
> > tuple, and then if we get the SPEC_CONFIRM in the next stream we can
> > send that tuple?
> >
>
> Your suggestion makes sense to me.  So, we can try it.

I have implemented this and attached it as a separate patch in my
latest patch set [1].
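
A rough sketch of the idea (the txn->specinsert field is my shorthand
for illustration, not necessarily what the patch uses): stash the
unconfirmed speculative insert on the transaction so that it survives
across streams, and let SPEC_CONFIRM replay it as a normal insert:

    case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:
        /* remember the change; the confirmation may arrive in a
         * later stream of the same transaction */
        txn->specinsert = change;
        break;

    case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
        /* the matching SPEC_INSERT may have been received in a
         * previous stream, so fetch it from the transaction */
        Assert(txn->specinsert != NULL);
        change = txn->specinsert;
        change->action = REORDER_BUFFER_CHANGE_INSERT;
        /* ... send it to the output plugin as a normal insert ... */
        txn->specinsert = NULL;
        break;
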
>
> > 2. During commit time, in DecodeCommit, we check whether we need to
> > skip the changes of the transaction by calling SnapBuildXactNeedsSkip.
> > But since we now support streaming, it's possible that before we
> > decode the commit WAL we have already sent the changes to the output
> > plugin, even though we could have skipped those changes.  So my
> > question is: instead of checking at commit time, can't we check
> > before adding to the ReorderBuffer itself?
> >
>
> I think if we can do that then the same will be true for current code
> irrespective of this patch.  I think it is possible that we can't take
> that decision while decoding because we haven't assembled a consistent
> snapshot yet.  I think we might be able to do that while we try to
> stream the changes.  I think we need to take care of all the
> conditions during streaming (when the logical_decoding_work_mem limit
> is reached) as we do in DecodeCommit.  This needs a bit more study.

I have analyzed this further, and I think we cannot evaluate all of
the conditions while streaming.  IMHO, once we reach
SNAPBUILD_FULL_SNAPSHOT we start adding a transaction's changes to the
reorder buffer so that they can be sent if the transaction commits
after we have reached SNAPBUILD_CONSISTENT.  However, if the commit
happens before we reach SNAPBUILD_CONSISTENT, we need to ignore the
transaction.  So even under SNAPBUILD_FULL_SNAPSHOT we may stream
changes that later have to be dropped, and we cannot make that
decision while streaming because the commit LSN is not yet known.
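
To make the asymmetry concrete, here is a rough sketch using the
existing snapbuild.c helpers (the comments are mine, not patch code):
at commit time the commit record's LSN is known, but at streaming time
only the snapshot builder's current state is:

    /* At commit time (DecodeCommit): the commit LSN is known, so the
     * whole transaction can be skipped if it committed before we
     * reached the consistent point. */
    if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr))
        return;                 /* skip the transaction entirely */

    /* At streaming time (logical_decoding_work_mem exceeded): only the
     * builder state is known, not the eventual commit LSN.  Changes
     * queued under SNAPBUILD_FULL_SNAPSHOT may therefore be streamed
     * now and dropped later if the commit turns out to precede the
     * consistent point. */
    if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)
        return;                 /* nothing queued yet, nothing to stream */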

[1] https://www.postgresql.org/message-id/CAFiTN-snMb%3D53oqkM8av8Lqfxojjm4OBwCNxmFssgLCceY_zgg%40mail.gmail.com

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


