Re: bad JIT decision - Mailing list pgsql-general

From Andres Freund
Subject Re: bad JIT decision
Msg-id 20200728210748.heyfzf7uv6n5ot3k@alap3.anarazel.de
In response to Re: bad JIT decision  (David Rowley <dgrowleyml@gmail.com>)
List pgsql-general
Hi,

On 2020-07-28 11:54:53 +1200, David Rowley wrote:
> Is there some reason that we can't consider jitting on a more granular
> basis?

There's a substantial "constant" overhead to doing JIT. And it's
nontrivial to determine which parts of the query should be JITed and
which should not.


> To me, it seems wrong to have a jit cost per expression and
> demand that the plan cost > #nexprs * jit_expr_cost before we do jit
> on anything.  It'll make it pretty hard to predict when jit will occur
> and doing things like adding new partitions could suddenly cause jit
> to not enable for some query any more.
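As a rough sketch of the whole-plan policy being discussed (PostgreSQL's actual check compares the top plan's total cost against the `jit_above_cost` GUC, default 100000; the helper below is a simplified illustration with made-up numbers, not the planner's code):

```python
# Simplified sketch of the whole-plan JIT decision, modeled on
# PostgreSQL's jit_above_cost threshold (GUC default 100000).
# Illustrative only; not the planner's actual implementation.

JIT_ABOVE_COST = 100000.0

def jit_whole_plan(total_plan_cost: float) -> bool:
    """JIT every expression in the plan, or none of them."""
    return total_plan_cost > JIT_ABOVE_COST

# Adding partitions raises the total cost, so a query can cross the
# threshold even though each individual expression stays cheap:
print(jit_whole_plan(99000.0))   # False: just under the threshold
print(jit_whole_plan(101000.0))  # True: just over, so everything is JITed
```

This all-or-nothing shape is what makes the behavior hard to predict: a small change in total cost flips JIT for the entire query.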

I think that's the right answer though:

> ISTM a more granular approach would be better. For example, for the
> expression we expect to evaluate once, there's likely little point in
> jitting it, but for the one on some other relation that has more rows,
> where we expect to evaluate it 1 billion times, there's likely good
> reason to jit that.  Wouldn't it be better to consider it at the
> RangeTblEntry level?

Because this'd still JIT if a query has 10k unconditional partition
accesses, with the corresponding expressions, even if each returns just
one row?
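The 10k-partition scenario can be sketched to show how the two policies diverge (the per-RTE threshold here is a hypothetical number, not an existing GUC; only `jit_above_cost`'s default is real):

```python
# Sketch of the 10k-partition case from the mail: many unconditional
# partition scans, each returning a single row. Thresholds and helper
# names are illustrative assumptions, not planner code.

JIT_ABOVE_COST = 100000.0    # actual GUC default for the whole-plan rule
PER_RTE_THRESHOLD = 1000.0   # hypothetical per-relation threshold

partition_scan_costs = [15.0] * 10_000  # 10k cheap one-row scans

# Whole-plan rule: costs sum across partitions, so the query crosses
# the threshold even though every expression runs only once per scan.
whole_plan_jits = sum(partition_scan_costs) > JIT_ABOVE_COST

# Per-RTE rule: each scan is judged on its own cost, so none qualify.
per_rte_jits = [cost > PER_RTE_THRESHOLD for cost in partition_scan_costs]

print(whole_plan_jits)    # True
print(any(per_rte_jits))  # False
```

Under these (assumed) numbers, the whole-plan rule JITs a query of 10k trivially cheap scans while a per-RTE rule would JIT none of them, which is the tension the thread is probing.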

(I'm rebasing my tree that tries to reduce the overhead / allow caching
/ increase efficiency to current PG, but it's a fair bit of work)

Greetings,

Andres Freund


