On Sun, 26 Jul 2020 at 02:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Andres Freund <andres@anarazel.de> writes:
> > On 2020-07-24 18:37:02 -0400, Tom Lane wrote:
> >> Yeah. I'm fairly convinced that the v12 defaults are far too low,
> >> because we are constantly seeing complaints of this sort.
>
> > I think the issue is more that we need to take into account that the
> > overhead of JITing scales ~linearly with the number of JITed
> > expressions. And that's not done right now. I've had a patch somewhere
> > that had a prototype implementation of changing the costing to be
> > #expressions * some_cost, and I think that's a lot more accurate.
>
> Another thing we could try with much less effort is scaling it by the
> number of relations in the query. There's already some code in the
> plancache that tries to estimate planning effort that way, IIRC.
> Such a scaling would be very legitimate for the cost of compiling
> tuple-deconstruction code, and for other expressions it'd kind of
> amount to an assumption that the expressions-per-table ratio is
> roughly constant. If you don't like that, maybe some simple
> nonlinear growth rule would work.

I had imagined something a bit less all-or-nothing. I had thought
that the planner could fairly cheaply decide whether or not JIT
should occur at a per-Expr level. For WHERE clause items we know
"norm_selec" and we know which baserestrictinfos come before this
RestrictInfo, so we could estimate the number of evaluations of each
item in the WHERE clause. For Exprs in the targetlist we have the
estimated rows from the RelOptInfo, and HAVING clause Exprs will be
evaluated a similar number of times. The planner could then do
something along the lines of assuming a cost of, say,
1000 * cpu_operator_cost to compile an Expr, assuming that a compiled
Expr will be some percentage faster than an interpreted one, and only
JITing when the Expr is likely to be evaluated enough times for
compilation to be an overall win. Optimize and inline would just have
higher thresholds.
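
To make that concrete, here's a rough standalone sketch of the
per-Expr break-even test I have in mind. It's only a model, not
planner code: the 1000 * cpu_operator_cost compile cost and the 50%
per-evaluation speedup are illustrative assumptions, and in the real
planner the evaluation cost and row estimates would come from
cost_qual_eval() and the RelOptInfo rather than being hard-coded.

#include <stdbool.h>
#include <stdio.h>

#define CPU_OPERATOR_COST     0.0025  /* default cpu_operator_cost */
#define JIT_COMPILE_COST      (1000 * CPU_OPERATOR_COST)  /* assumed */
#define JIT_SPEEDUP_FRACTION  0.5     /* assumed saving per evaluation */

/* Would JITing one Expr likely pay for its compilation cost? */
static bool
jit_worthwhile(double n_evals, double eval_cost)
{
    double saving_per_eval = eval_cost * JIT_SPEEDUP_FRACTION;

    return n_evals * saving_per_eval > JIT_COMPILE_COST;
}

int
main(void)
{
    /*
     * WHERE clause item: the relation has 1 million rows and the quals
     * ahead of this one have a combined selectivity of 0.01, so this
     * Expr is evaluated roughly 10,000 times.
     */
    double rel_rows = 1000000.0;
    double preceding_selec = 0.01;
    double qual_evals = rel_rows * preceding_selec;
    double eval_cost = CPU_OPERATOR_COST;   /* a single operator, say */

    /* Targetlist Expr: driven by the estimated output rows instead. */
    double tlist_evals = 100.0;

    printf("JIT the qual:       %s\n",
           jit_worthwhile(qual_evals, eval_cost) ? "yes" : "no");
    printf("JIT the tlist expr: %s\n",
           jit_worthwhile(tlist_evals, eval_cost) ? "yes" : "no");

    return 0;
}

With those made-up numbers the qual comes out worth JITing and the
100-row targetlist Expr doesn't, which is the sort of per-Expr
distinction the current all-or-nothing costing can't make.
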
David