With the build issues in check, I'm looking at the configuration settings.
I think taking the total cost as the triggering threshold is probably
good enough for a start. The cost modeling can be refined over time.
We should document that jit_optimize_above_cost and
jit_inline_above_cost only take effect once jit_above_cost has also
been exceeded; otherwise nothing happens.
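For the documentation, a postgresql.conf sketch along these lines might
make the gating explicit (the numeric values below are purely
illustrative, not proposed defaults):

```
jit_above_cost = 100000           # JIT is considered only above this total cost
jit_optimize_above_cost = 500000  # no effect unless jit_above_cost also fires
jit_inline_above_cost = 500000    # likewise gated on jit_above_cost
```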
One problem I see is that if someone sets something like
enable_seqscan=off, the artificial cost increase created by those
settings would quite likely bump the query over the jit threshold,
altering the query's performance characteristics in a way the user
did not intend. I don't have a good idea how to address this right
now.
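To illustrate the interaction (a sketch; the table name and cost numbers
are made up, but the mechanism is that disabled plan node types are
penalized with a huge disable_cost constant rather than being ruled out
entirely):

```sql
SET enable_seqscan = off;
-- The planner adds disable_cost to seqscan paths instead of forbidding
-- them, so a trivial query's estimated total cost can jump from, say,
-- ~20 to ~10000000020 and sail past any plausible jit_above_cost,
-- triggering JIT compilation for a query that doesn't warrant it.
EXPLAIN SELECT * FROM small_table;
```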
I ran some performance assessments:

merge base (0b1d1a038babff4aadf0862c28e7b667f1b12a30)
    make installcheck  3.14s user 3.34s system 17% cpu 37.954 total

jit branch, default settings
    make installcheck  3.17s user 3.30s system 13% cpu 46.596 total

jit_above_cost=0
    make installcheck  3.30s user 3.53s system  5% cpu 1:59.89 total

jit_optimize_above_cost=0 (and jit_above_cost=0)
    make installcheck  3.44s user 3.76s system  1% cpu 8:12.42 total

jit_inline_above_cost=0 (and jit_above_cost=0)
    make installcheck  3.32s user 3.62s system  2% cpu 5:35.58 total
One can see the CPU savings quite nicely.
One obvious problem is that with the default settings, the test suite
run gets noticeably slower, roughly 23% in wall-clock time going by the
totals above. (These figures are reproducible over several runs.) Is
there some debugging stuff turned on that would explain this?
Or would just loading the jit module in each session cause this?
From the other results, we can see that one clearly needs quite a big
database to see a solid benefit from this. Do you have any information
gathered about this so far? Any scripts to create test databases and
test queries?
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services