Re: Stampede of the JIT compilers - Mailing list pgsql-hackers
From: James Coleman
Subject: Re: Stampede of the JIT compilers
Date:
Msg-id: CAAaqYe-mU-KepwZma5cf=8-uW+LkNOqiXYzozQEjdi-1_o+phw@mail.gmail.com
In response to: Re: Stampede of the JIT compilers (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: Stampede of the JIT compilers
List: pgsql-hackers
On Sat, Jun 24, 2023 at 7:40 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
>
> On 6/24/23 02:33, David Rowley wrote:
> > On Sat, 24 Jun 2023 at 02:28, James Coleman <jtc331@gmail.com> wrote:
> >> There are a couple of issues here. I'm sure it's been discussed
> >> before, and it's not the point of my thread, but I can't help but
> >> note that the default value of jit_above_cost of 100000 seems
> >> absurdly low. On good hardware like we have, even well-planned
> >> queries with costs well above that won't be taking as long as JIT
> >> compilation does.
> >
> > It would be good to know your evidence for thinking it's too low.

It's definitely possible that I stated this much more emphatically than
I should have -- it was coming out of my frustration with this
situation, after all. I think, though, that my later comments here will
provide some philosophical justification for it.

> > The main problem I see with it is that the costing does not account
> > for how many expressions will be compiled. It's quite different to
> > compile JIT expressions for a query against a single table with a
> > simple WHERE clause vs. some query with many joins which scans a
> > partitioned table with 1000 partitions, for example.
>
> I think it's both - as explained by James, there are queries with much
> higher cost where the JIT compilation still takes much longer than
> just running the query without JIT, so a 100k cost difference is
> clearly not sufficient to make up for the extra JIT compilation cost.
>
> But it's true that's because the JIT costing is very crude, and
> there's little effort to account for how expensive the compilation
> will be (say, how many expressions, ...).
>
> IMHO there's no "good" default that wouldn't hurt an awful lot of
> cases.
>
> There's also a lot of bias - people are unlikely to notice/report
> cases where the JIT (including costing) works fine, but they sure are
> annoyed when it makes the wrong choice.
>
> >> But on the topic of the thread: I'd like to know if anyone has ever
> >> considered implementing a GUC/feature like
> >> "max_concurrent_jit_compilations" to cap the number of backends
> >> that may be compiling a query at any given point, so that we avoid
> >> this optimization running amok and consuming all of a server's
> >> resources.
> >
> > Why does the number of backends matter? JIT compilation consumes the
> > same CPU resources that the JIT compilation is meant to save. If the
> > JIT compilation in your query happened to be a net win rather than a
> > net loss in terms of CPU usage, then why would
> > max_concurrent_jit_compilations be useful? It would just restrict us
> > on what we could save. This idea just covers up the fact that the
> > JIT costing is disconnected from reality. It's a bit like trying to
> > tune your radio with the volume control.
>
> Yeah, I don't quite get this point either. If JIT for a given query
> helps (i.e. makes execution shorter), it'd be harmful to restrict the
> maximum number of concurrent compilations. If we just disable JIT
> after some threshold is reached, that'd make queries longer and just
> make the pileup worse.

My thought process here is that, given the poor modeling of JIT costing
you've both described, we're likely to estimate the cost of "easy" JIT
compilation acceptably well but to estimate "complex" JIT compilation
far below its actual cost.
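To make that concrete, here's roughly how the mismatch shows up in a
plan footer; the table name and numbers are hypothetical, but the JIT
section is what any recent PostgreSQL prints:

    -- Any query whose total estimated cost exceeds jit_above_cost
    -- (default 100000) triggers compilation when jit = on.
    SET jit = on;
    SHOW jit_above_cost;

    EXPLAIN (ANALYZE)
    SELECT sum(a * b) FROM big_table;  -- hypothetical table

    -- The plan ends with a JIT section along these lines:
    --   JIT:
    --     Functions: 12
    --     Options: Inlining false, Optimization false,
    --              Expressions true, Deforming true
    --     Timing: Generation 2.5 ms, Inlining 0.0 ms,
    --             Optimization 1.1 ms, Emission 19.3 ms, Total 22.9 ms
    -- When "Total" rivals (or dwarfs) the actual execution time, the
    -- threshold was too low for that query.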
Another way of saying this is that our range of JIT compilation cost
estimates may well be fine on the bottom end but clamped on the high
end, which means our failure modes will tend towards the worst
mis-costings being the most painful (e.g., 2s of compilation time for a
100ms query). This is even more the case in an OLTP system where the
majority of queries are already known to be quite fast. In that
context, capping the number of backends compiling concurrently,
particularly where plans (and JITed code?) might be cached, could well
save us (depending on workload).

That being said, I could imagine an alternative approach solving a
similar problem: a way of exiting early from compilation if it takes
longer than we expect.

> If it doesn't help for a given query, we shouldn't be doing it at all.
> But that should be based on better costing, not some threshold.
>
> In practice there'll be a mix of queries where JIT does/doesn't help,
> and this threshold would just arbitrarily (and quite unpredictably)
> enable/disable JIT, making it yet harder to investigate slow queries
> (as if we didn't have enough trouble with that already).
>
> > I think the JIT costs would be better if they took into account how
> > useful each expression will be to JIT compile. There were some ideas
> > thrown around in [1].
>
> +1 to that

That does sound like an improvement. One thing about our JIT that is
different from, e.g., a browser JS engine's JITing is that we don't
substitute the JITed code in "on the fly" while execution is already
underway. That'd be another, albeit quite difficult, way to solve these
issues.

Regards,
James Coleman
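P.S. For anyone bitten by this today: the knobs that actually exist are
the cost thresholds and the jit switch itself, so the usual mitigation
(not the proposed cap, which is hypothetical) looks something like the
following; the database and role names are made up:

    -- Raise the threshold well past the 100000 default, or turn JIT
    -- off entirely for a latency-sensitive OLTP workload.
    ALTER DATABASE app SET jit_above_cost = 1000000;  -- illustrative value
    ALTER ROLE oltp_user SET jit = off;

    -- Or just for the current transaction while investigating:
    BEGIN;
    SET LOCAL jit = off;
    -- run the query under investigation here
    COMMIT;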