Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions - Mailing list pgsql-hackers

From Dilip Kumar
Subject Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date
Msg-id CAFiTN-s+tKU0amysEyj0p=LMR7ooJYZ3-p7O-kJF7rsB9uTHjw@mail.gmail.com
In response to Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Sat, Jan 4, 2020 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Mon, Dec 30, 2019 at 3:11 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Thu, Dec 12, 2019 at 9:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

> > > > 0002-Issue-individual-invalidations-with-wal_level-log
> > > > ----------------------------------------------------------------------------
> > > > 1.
> > > > xact_desc_invalidations(StringInfo buf,
> > > > {
> > > > ..
> > > > + else if (msg->id == SHAREDINVALSNAPSHOT_ID)
> > > > + appendStringInfo(buf, " snapshot %u", msg->sn.relId);
> > > >
> > > > You have removed logging for the above cache but forgot to remove its
> > > > reference from one of the places.  Also, I think you need to add a
> > > > comment somewhere in inval.c to say why you are writing WAL for
> > > > some types of invalidations and not for others.
> > Done
> >
>
> I don't see any new comments as asked by me.
Done

> I think we should also
> consider WAL logging at each command end instead of doing it piecemeal as
> discussed in another email [1], which will involve fewer code changes
> and may perform better.  You might want to evaluate the
> performance of both approaches.

Still pending; I will work on this.
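
A rough sketch of what the command-end approach could look like, just for
illustration -- xl_xact_invals, MinSizeOfXactInvals and
XLOG_XACT_INVALIDATIONS are assumed names here, not taken from the posted
patches:

#include "postgres.h"

#include "access/rmgr.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "storage/sinval.h"

/* Assumed record layout and info code, for this sketch only. */
typedef struct xl_xact_invals
{
	int			nmsgs;		/* number of invalidation messages */
	SharedInvalidationMessage msgs[FLEXIBLE_ARRAY_MEMBER];
} xl_xact_invals;

#define MinSizeOfXactInvals	offsetof(xl_xact_invals, msgs)
#define XLOG_XACT_INVALIDATIONS	0x60	/* assumed free xact info code */

/*
 * WAL-log all invalidation messages accumulated for the current command
 * as a single record at command end, instead of one record per message.
 */
static void
LogCommandEndInvalidations(SharedInvalidationMessage *msgs, int nmsgs)
{
	xl_xact_invals xlrec;

	if (nmsgs <= 0)
		return;

	xlrec.nmsgs = nmsgs;

	XLogBeginInsert();
	XLogRegisterData((char *) &xlrec, MinSizeOfXactInvals);
	XLogRegisterData((char *) msgs,
					 nmsgs * sizeof(SharedInvalidationMessage));
	XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
}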
>
> > > > 0005-Gracefully-handle-concurrent-aborts-of-uncommitte
> > > > ----------------------------------------------------------------------------------
> > > > 1.
> > > > @@ -1877,6 +1877,7 @@ ReorderBufferCommit(ReorderBuffer *rb, TransactionId xid,
> > > >   PG_CATCH();
> > > >   {
> > > >   /* TODO: Encapsulate cleanup
> > > > from the PG_TRY and PG_CATCH blocks */
> > > > +
> > > >   if (iterstate)
> > > >   ReorderBufferIterTXNFinish(rb, iterstate);
> > > >
> > > > Spurious line change.
> > > >
> > Done
>
> + /*
> + * We don't expect direct calls to heap_getnext with valid
> + * CheckXidAlive for regular tables. Track that below.
> + */
> + if (unlikely(TransactionIdIsValid(CheckXidAlive) &&
> + !(IsCatalogRelation(scan->rs_base.rs_rd) ||
> +   RelationIsUsedAsCatalogTable(scan->rs_base.rs_rd))))
> + elog(ERROR, "improper heap_getnext call");
>
> Earlier, I thought we didn't need to check whether it is a regular table
> in this check, but it is required because output plugins can try to do
> that, and if they do so during decoding (with historic snapshots), it
> should not be allowed.
>
> How about changing the error message to "unexpected heap_getnext call
> during logical decoding" or something like that?
Done
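With that wording, the check would read roughly as follows (same condition
as quoted above, only the message text changed):

	if (unlikely(TransactionIdIsValid(CheckXidAlive) &&
				 !(IsCatalogRelation(scan->rs_base.rs_rd) ||
				   RelationIsUsedAsCatalogTable(scan->rs_base.rs_rd))))
		elog(ERROR, "unexpected heap_getnext call during logical decoding");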
>
> > > > 2. The commit message of this patch refers to Prepared transactions.
> > > > I think that needs to be changed.
> > > >
> > > > 0006-Implement-streaming-mode-in-ReorderBuffer
> > > > -------------------------------------------------------------------------
>
> A few comments on v4-0018-Review-comment-fix-and-refactoring:
> 1.
> + if (streaming)
> + {
> + /*
> + * Set the last LSN of the stream as the final_lsn before calling
> + * stream stop.
> + */
> + txn->final_lsn = prev_lsn;
> + rb->stream_stop(rb, txn);
> + }
>
> Shouldn't we try to set final_lsn as is done by Vignesh's patch [2]?
We have already agreed upon the current implementation.
>
> 2.
> + if (streaming)
> + {
> + /*
> + * Set the CheckXidAlive to the current (sub)xid to which this
> + * change belongs, so that we can detect the abort while we are
> + * decoding.
> + */
> + CheckXidAlive = change->txn->xid;
> +
> + /* Increment the stream count. */
> + streamed++;
> + }
>
> Is the variable 'streamed' used anywhere?
Removed
>
> 3.
> + /*
> + * Destroy the (relfilenode, ctid) hashtable, so that we don't leak
> + * any memory. We could also keep the hash table and update it with
> + * new ctid values, but this seems simpler and good enough for now.
> + */
> + ReorderBufferDestroyTupleCidHash(rb, txn);
>
> Won't this be required only when we are streaming changes?
Fixed
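Roughly along these lines (a sketch only, not the exact patch hunk,
assuming the tuple-cid hash is rebuilt for each streamed run):

	/*
	 * Destroy the (relfilenode, ctid) hash only when streaming, since it
	 * is rebuilt for each streamed run of the transaction.
	 */
	if (streaming)
		ReorderBufferDestroyTupleCidHash(rb, txn);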
>
> As per my understanding, apart from the above comments, the known
> pending work for this patchset is as follows:
> a. The two open items you agreed to in the email [3].
> b. Complete the handling of schema_sent as discussed above [4].
> c. A few comments by Vignesh and my responses to them [5][6].
> d. WAL overhead and performance testing for the additional WAL logging by
> this patchset.
> e. Some way to see the tuples of streamed transactions via the decoding
> API, as speculated by you [7].
>
> Have I missed anything?
I have worked on most of these items; I will reply to them separately.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


