Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> On 2021-Nov-11, Alvaro Herrera wrote:
>> But what really surprised me is that the average time to optimize
>> per function is now 2.06ms ... less than half of the previous
>> measurement. It emits 10% fewer functions than before, but the time to
>> both optimize and emit is reduced by 50%. How does that make sense?
> Ah, here's a query that illustrates what I'm on about. I found this
> query[1] in a blog post[2].
> ...
> Query 1, 148 functions JIT-compiled.
> Average time to optimize, per function: 435.153/148 = 2.940ms;
> average time to emit, per function: 282.216/148 = 1.907ms
> Query 2, 137 functions JIT-compiled.
> Average time to optimize, per function: 374.103/137 = 2.731ms;
> average time to emit, per function: 254.557/137 = 1.858ms
> Query 3, 126 functions JIT-compiled.
> Average time to optimize, per function: 229.128/126 = 1.818ms;
> average time to emit, per function: 167.338/126 = 1.328ms
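(For anyone wanting to reproduce that sort of measurement: the per-function
figures above are just the Optimization and Emission timings from the JIT
summary that EXPLAIN (ANALYZE) prints, divided by the reported function count.
A minimal recipe follows; the zeroed cost thresholds are only there to force
JIT on a cheap query, and the generate_series() aggregate is an arbitrary
stand-in, since the blog-post queries aren't reproduced in this message.)

SET jit = on;
SET jit_above_cost = 0;
SET jit_optimize_above_cost = 0;
SET jit_inline_above_cost = 0;

-- With JIT forced on, EXPLAIN (ANALYZE) prints a "JIT:" block with
-- "Functions: N" and per-phase Timing (Generation, Inlining, Optimization,
-- Emission); dividing those timings by N gives the averages quoted above.
EXPLAIN (ANALYZE)
SELECT sum(i), avg(i), max(i % 7)
FROM generate_series(1, 1000000) AS s(i);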
Yeah, in combination with your other measurement, it sure does look like
there's something worse-than-linear going on here. The alternative is to
assume that the individual functions are more complex in one query than
the other, and that seems like a bit of a stretch.
You could probably generate some queries with lots and lots of expressions
to characterize this better. If it is O(N^2), it should not be hard to
drive the cost up to the point where the guilty bit of code would stand
out in a perf trace.
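Something along these lines might be enough for that (untested); it uses
psql's \gexec to build a target list of N throwaway CASE expressions, where
the expression shape and the counts are entirely arbitrary. Rerun with the
500 bumped to 1000, 2000, ... and watch whether the Optimization and Emission
timings in the JIT summary grow faster than the number of expressions:

SET jit = on;
SET jit_above_cost = 0;
SET jit_optimize_above_cost = 0;

-- Build "EXPLAIN (ANALYZE) SELECT <N CASE expressions> FROM ..." as a string
-- and let \gexec run it.  Each CASE differs slightly just to keep the target
-- list from being trivially uniform.
SELECT 'EXPLAIN (ANALYZE) SELECT '
       || string_agg(format('CASE WHEN i %% %s = 0 THEN i ELSE -i END', n), ', ')
       || ' FROM generate_series(1, 100000) AS s(i)'
FROM generate_series(1, 500) AS g(n) \gexec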
regards, tom lane