On Tue, Jan 19, 2016 at 4:20 AM, Thomas Kellerer <spam_eater@gmx.net> wrote:
>
> Robert Haas wrote:
> > This isn't the first complaint about this mechanism that we've gotten,
> > and it won't be the last. Way too many of our users are way more
> > aware than they should be that the threshold here is five rather than
> > any other number, which to me is a clear-cut sign that this needs to
> > be improved. How to improve it is a harder question. We lack the
> > ability to do any kind of sensitivity analysis on a plan, so we can't
> > know whether there are other parameter values that would have resulted
> > in a different plan, nor can we test whether a particular set of
> > parameter values would have changed the outcome.
>
> (I initially posted that question on the JDBC mailing list)
>
> To be honest: looking at the efforts Oracle has done since 9 up until 12 I
> am not sure this is a problem that can be solved by caching plans.
>
> Even with the new "in-flight" re-planning in Oracle 12 ("cardinality
> feedback") and all the effort that goes into caching plans we are still
> seeing similar problems with (prepared) statements that are suddenly slow.
> And as far as I can tell, the infrastructure around plan caching,
> invalidation, bind variable peeking and all that seems to be a *lot* more
> complex ("sophisticated") in Oracle compared to Postgres. And the results
> don't seem to justify the effort (at least in my experience).
>
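
For anyone following this from the JDBC side: the "five" Robert refers to
is the hard-coded constant in choose_custom_plan() in plancache.c. Very
roughly, the decision looks like the Python paraphrase below (illustration
only, not the actual backend code; it also glosses over details such as
the planning-cost charge that gets folded into the custom-plan costs):

# Paraphrase of the heuristic in plancache.c's choose_custom_plan(): the
# first five executions of a prepared statement are always planned with
# the actual parameter values ("custom" plans); from the sixth execution
# on, the cached parameter-independent ("generic") plan is used whenever
# its estimated cost is below the average cost of the custom plans seen
# so far.

CUSTOM_PLAN_TRIES = 5   # the hard-coded "five" being complained about

def choose_custom_plan(num_custom_plans, total_custom_cost, generic_cost):
    """Return True to plan this execution with the actual parameter values."""
    if num_custom_plans < CUSTOM_PLAN_TRIES:
        return True
    avg_custom_cost = total_custom_cost / num_custom_plans
    # Keep custom-planning only while it still looks cheaper than simply
    # reusing the generic plan.
    return generic_cost >= avg_custom_cost

So once the generic plan's estimate looks competitive it wins, even if
some future parameter value would have produced a very different custom
plan, which is exactly the sensitivity-analysis gap described above.
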
I have heard the same feedback as above some time back from researchers
working in the query optimization area. They came up with a different
concept called "Plan Bouquet" [1], in which multiple plans are tried
during execution and the best among them is the one allowed to proceed
(or something along those lines), to address bad plan-selection problems.
Their claim is that this technique proves to be better on benchmarks than
the existing mechanisms used for query optimization.
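
As far as I understood it (and I may well be misremembering the details),
the execution-time part works roughly as in the sketch below: the candidate
plans are tried in increasing order of estimated cost under a cost budget,
a plan is abandoned as soon as its actual cost exceeds the budget, and the
budget keeps doubling until some plan completes. All of the names here are
made up for illustration, and the real algorithm is considerably more
careful about which plans make it into the bouquet:

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    estimated_cost: float

class BudgetExceeded(Exception):
    """Raised by the (hypothetical) executor hook below when a plan's
    actual cost passes the budget it was given."""

def execute_with_budget(plan, budget):
    # Hypothetical hook: run `plan`, abort with BudgetExceeded once its
    # accumulated cost exceeds `budget`, otherwise return the result.
    raise NotImplementedError

def run_bouquet(bouquet, start_budget):
    """`bouquet`: candidate plans sorted by ascending estimated cost;
    `start_budget` must be positive."""
    budget = start_budget
    remaining = list(bouquet)
    while remaining:
        # Plans whose estimate fits the current budget get a chance now;
        # the more expensive ones wait for a later (doubled) budget.
        for plan in [p for p in remaining if p.estimated_cost <= budget]:
            try:
                return execute_with_budget(plan, budget)
            except BudgetExceeded:
                remaining.remove(plan)   # proved too expensive; drop it
        budget *= 2
    raise RuntimeError("no plan in the bouquet completed within budget")

The attraction, as I understood it, is that the worst case is bounded
relative to the best plan in the bouquet, without relying on selectivity
estimates at all.
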
I am not advocating any such mechanism, but rather sharing an alternative
approach that has been explored for this class of problem.