2010/2/11 Tom Lane <tgl@sss.pgh.pa.us>:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Thu, Feb 11, 2010 at 7:48 AM, Bart Samwel <bart@samwel.tk> wrote:
>>> Because that's the
>>> underlying assumption of the "ratio" criterion -- that re-planning with
>>> filled-in parameters takes about as much time as the initial planning run
>>> took.
>
>> We only want to replan when replanning is relatively cheap compared to
>> execution,
>
> Well, no, consider the situation where planning takes 50 ms, the generic
> plan costs 100ms to execute, but a parameter-specific plan would take 1ms
> to execute. Planning is very expensive compared to execution but it's
> still a win to do it.
>
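
To spell out the arithmetic (a toy C fragment, not PostgreSQL code): the
comparison that matters is planning time plus custom execution time
against generic execution time, not planning time against execution time
in isolation.

#include <stdio.h>

int main(void)
{
    double plan_ms = 50.0;           /* fresh planning attempt */
    double generic_exec_ms = 100.0;  /* executing the cached generic plan */
    double custom_exec_ms = 1.0;     /* executing a parameter-specific plan */

    double replan_total = plan_ms + custom_exec_ms;
    printf("replan %.0f ms vs generic %.0f ms -> %s\n",
           replan_total, generic_exec_ms,
           replan_total < generic_exec_ms ? "replanning wins" : "generic wins");
    return 0;
}

51 ms beats 100 ms, so replanning wins even though planning costs 50x the
custom execution.
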
> The problem that we face is that we don't have any very good way to tell
> whether a fresh planning attempt is likely to yield a plan significantly
> better than the generic plan. I can think of some heuristics --- for
> example if the query contains LIKE with a parameterized pattern or a
> partitioned table --- but that doesn't seem like a particularly nice
> road to travel.
>
> A possible scheme is to try it and keep track of whether we ever
> actually do get a better plan. If, after N attempts, none of the custom
> plans were ever more than X% cheaper than the generic one, then give up
> and stop attempting to produce custom plans. Tuning the variables might
> be challenging though.
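
If I understand the scheme, the decision logic could look roughly like
this (a standalone C sketch, not real plancache code; N_ATTEMPTS and
MIN_IMPROVEMENT are made-up knobs standing in for your N and X%):

#include <stdbool.h>
#include <stdio.h>

#define N_ATTEMPTS      5      /* custom plans to try before deciding */
#define MIN_IMPROVEMENT 0.10   /* X%: custom must be >= 10% cheaper to count */

typedef struct PlanStats
{
    int    custom_attempts;    /* custom plans generated so far */
    bool   any_big_win;        /* did any custom plan beat generic by X%? */
    double generic_cost;       /* estimated cost of the cached generic plan */
} PlanStats;

/* Decide whether to plan this execution with the actual parameter values. */
static bool
use_custom_plan(PlanStats *stats)
{
    if (stats->custom_attempts < N_ATTEMPTS)
        return true;               /* still gathering evidence */
    return stats->any_big_win;     /* keep replanning only if it ever paid off */
}

/* Record the estimated cost of a freshly built custom plan. */
static void
record_custom_plan(PlanStats *stats, double custom_cost)
{
    stats->custom_attempts++;
    if (custom_cost < stats->generic_cost * (1.0 - MIN_IMPROVEMENT))
        stats->any_big_win = true;
}

int main(void)
{
    PlanStats stats = { 0, false, 100.0 };

    /* Simulate executions whose custom plans are only 5% cheaper. */
    for (int i = 0; i < 5; i++)
        if (use_custom_plan(&stats))
            record_custom_plan(&stats, 95.0);

    printf("after %d attempts, keep custom planning: %s\n",
           stats.custom_attempts,
           use_custom_plan(&stats) ? "yes" : "no");
    return 0;
}

Here the 5% improvements never clear the 10% bar, so after five attempts
it settles on the generic plan.
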
Even so, I'm afraid every heuristic will be bad in some cases. The hard
problem is identifying a bad generic plan, and nothing guarantees that a
non-generic plan will actually be better than the generic one. Still, I
think we need some way to get lazy prepared statements, where the plan is
generated every time with the known parameter values.

Another idea: a special debug/test mode where pg stores the generic plan
for every prepared statement but still generates a specific plan for each
execution. When the estimated costs differ, pg emits a warning. This
would be slower, but it could identify problematic queries. It could be
implemented as a contrib module, something like auto_explain.
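
A rough sketch of the check (standalone C with hypothetical cost
functions, just to show the shape; the real thing would hook the planner
the way auto_explain hooks the executor):

#include <stdio.h>

#define COST_WARN_RATIO 2.0   /* warn when the plans differ by more than 2x */

/* hypothetical: estimated cost of the cached generic plan */
static double generic_plan_cost(void) { return 100.0; }

/* hypothetical: estimated cost of a plan built with the actual parameters */
static double custom_plan_cost(void) { return 40.0; }

static void
check_prepared_statement(const char *stmt_name)
{
    double generic = generic_plan_cost();
    double custom = custom_plan_cost();

    if (generic > custom * COST_WARN_RATIO)
        fprintf(stderr,
                "WARNING: prepared statement \"%s\": generic plan cost %.0f, "
                "parameter-specific plan cost %.0f\n",
                stmt_name, generic, custom);
}

int main(void)
{
    check_prepared_statement("stmt1");
    return 0;
}
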
regards
Pavel
>
> regards, tom lane
>