Re: Asynchronous Append on postgres_fdw nodes. - Mailing list pgsql-hackers

From Kyotaro Horiguchi
Subject Re: Asynchronous Append on postgres_fdw nodes.
Date
Msg-id 20201120.155152.1729683975321724791.horikyota.ntt@gmail.com
Whole thread Raw
In response to Re: Asynchronous Append on postgres_fdw nodes.  (Etsuro Fujita <etsuro.fujita@gmail.com>)
Responses Re: Asynchronous Append on postgres_fdw nodes.  (Kyotaro Horiguchi <horikyota.ntt@gmail.com>)
Re: Asynchronous Append on postgres_fdw nodes.  (Etsuro Fujita <etsuro.fujita@gmail.com>)
List pgsql-hackers
Thanks you for the new version.

At Tue, 17 Nov 2020 18:56:02 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in 
> On Mon, Oct 5, 2020 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:
> > Yes, if there are no objections from you or Thomas or Robert or anyone
> > else, I'll update Robert's patch as such.
> 
> Here is a new version of the patch (as promised in the developer
> unconference in PostgresConf.CN & PGConf.Asia 2020):
> 
> * In Robert's patch [1] (and Horiguchi-san's, which was created based
> on Robert's), ExecAppend() was modified to retrieve tuples from
> async-aware children *before* the tuples will be needed, but I don't

The "retrieve" means the move of a tuple from fdw to executor
(ExecAppend or ExecAsync) layer?

> think that's really a good idea, because the query might complete
> before returning the tuples.  So I modified that function so that a

I'm not sure how it matters. Anyway the fdw holds up to tens of tuples
before the executor actually make requests for them.  The reason for
the early fetching is letting fdw send the next request as early as
possible. (However, I didn't measure the effect of the
nodeAppend-level prefetching.)

> tuple is retrieved from an async-aware child *when* it is needed, like
> Thomas' patch.  I used FDW callback functions proposed by Robert, but
> introduced another FDW callback function ForeignAsyncBegin() for each
> async-aware child to start an asynchronous data fetch at the first
> call to ExecAppend() after ExecInitAppend() or ExecReScanAppend().

Even though the terminology is not officially determined, in the past
discussions "async-aware" meant "can handle async-capable subnodes"
and "async-capable" is used as "can run asynchronously".  Likewise you
seem to have changed the meaning of as_needrequest from "subnodes that
needs to request for the next tuple" to "subnodes that already have
got query-send request and waiting for the result to come".  I would
argue to use the words and variables (names) in such meanings.  (Yeah,
parallel_aware is being used in that meaning, I'm not sure what is the
better wordings for the aware-capable relationship in that case.)

> * For EvalPlanQual, I modified the patch so that async-aware children
> are treated as if they were synchronous when executing EvalPlanQual.

Doesn't async execution accelerate the epq-fetching?  Or does
async-execution goes into trouble in the EPQ path?

> * In Robert's patch, all async-aware children below Append nodes in
> the query waiting for events to occur were managed by a single EState,
> but I modified the patch so that such children are managed by each
> Append node, like Horiguchi-san's patch and Thomas'.

Managing in Estate give advantage for push-up style executor but
managing in node_state is simpler.

> * In Robert's patch, the FDW callback function
> ForeignAsyncConfigureWait() allowed multiple events to be configured,
> but I limited that function to only allow a single event to be
> configured, just for simplicity.

No problem for me.

> * I haven't yet added some planner/resowner changes from Horiguchi-san's patch.
> 
> * I haven't yet done anything about the issue on postgres_fdw's
> handling of concurrent data fetches by multiple ForeignScan nodes
> (below *different* Append nodes in the query) using the same
> connection discussed in [2].  I modified the patch to just disable
> applying this feature to problematic test cases in the postgres_fdw
> regression tests, by a new GUC enable_async_append.
> 
> Comments welcome!  The attached is still WIP and maybe I'm missing
> something, though.
> 
> Best regards,
> Etsuro Fujita
> 
> [1] https://www.postgresql.org/message-id/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com
> [2] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



pgsql-hackers by date:

Previous
From: Fujii Masao
Date:
Subject: Re: bad logging around broken restore_command
Next
From: Kyotaro Horiguchi
Date:
Subject: Re: Protect syscache from bloating with negative cache entries