Hi,
On 2022-03-30 14:30:32 +1300, David Rowley wrote:
> On Wed, 30 Mar 2022 at 13:20, Andres Freund <andres@anarazel.de> wrote:
> > I wonder whether it'd make sense to combine that with awareness of a few plan
> > types that can lead to large portions of child nodes never being executed. One
> > case where the current behaviour is at its worst is runtime partition pruning
> > in append - we compile expressions for whole subtrees that will never be
> > executed. We should be much more hesitant to compile there compared to a
> > cheap-ish node that we know will be executed as part of a large expensive part
> > of the plan tree.
>
> I think that's also a problem but I think that might be better fixed
> another way.
>
> There is a patch [1] around that seems to change things to compile JIT
> on-demand.
That's a bad idea to do on a per-function basis. Emitting functions one
by one is considerably slower than doing so in larger chunks. Of course that
can be addressed to some degree by something like what you suggest below.
> I've not looked at the patch but imagine the overhead might be kept minimal
> by initially setting the evalfunc to compile and run, then set it to just
> run the compiled Expr for subsequent executions.
That part I'm not worried about, there's such an indirection on the first call
either way IIRC.
> Maybe nodes below an Append/MergeAppend with run-time pruning could compile
> on-demand and other nodes up-front. Or maybe there's no problem with making
> everything on-demand.
Yea, that could work. The expressions for one "partition query" would still
have to be emitted at once. For each such subtree we should make a separate
costing decision. But I think an additional "will be executed" sub-node is a
different story; the threshold shouldn't be applied on a per-node basis. That
partitioning of the plan tree is kind of what I was trying to get at...
Greetings,
Andres Freund