Re: Avoiding bad prepared-statement plans. - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Avoiding bad prepared-statement plans.
Date:
Msg-id: 603c8f071002210437w43a58131r85bbe7eff90bc266@mail.gmail.com
In response to: Re: Avoiding bad prepared-statement plans. (Jeroen Vermeulen <jtv@xs4all.nl>)
Responses: Re: Avoiding bad prepared-statement plans.
List: pgsql-hackers
On Wed, Feb 17, 2010 at 5:52 PM, Jeroen Vermeulen <jtv@xs4all.nl> wrote:
> I may have cut this out of my original email for brevity... my impression is
> that the planner's estimate is likely to err on the side of scalability, not
> best-case response time; and that this is more likely to happen than an
> optimistic plan going bad at runtime.

Interestingly, most of the mistakes that I have seen are in the
opposite direction.

> Yeb points out a devil in the details though: the cost estimate is unitless.
>  We'd have to have some orders-of-magnitude notion of how the estimates fit
> into the picture of real performance.

I'm not sure to what extent you can assume that the cost is
proportional to the execution time.  I seem to remember someone
(Peter?) arguing that the two aren't related by any fixed ratio,
partly because things like page costs vs. CPU costs didn't match
physical reality, and that in fact some attempts to gather
empirically better values for parameters like random_page_cost and
seq_page_cost actually ended up making the plans worse rather than
better.  It would be nice to see some research in this area...
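
As a very rough sketch of the kind of measurement that would be
needed (assuming Python with psycopg2; the connection string and
query list are illustrative only), one could compare the planner's
unitless total cost against the actual runtime that EXPLAIN ANALYZE
reports, and see whether the ratio is anywhere near stable:

    # Compare the planner's cost estimate with measured execution
    # time, using the JSON output of EXPLAIN ANALYZE.  Assumes a
    # reachable "postgres" database; the queries are placeholders.
    import json
    import psycopg2

    QUERIES = [
        "SELECT count(*) FROM pg_class",
        "SELECT * FROM pg_attribute ORDER BY attrelid",
    ]

    conn = psycopg2.connect(dbname="postgres")
    cur = conn.cursor()
    for q in QUERIES:
        cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + q)
        raw = cur.fetchone()[0]
        if isinstance(raw, str):        # older servers return plain text
            raw = json.loads(raw)
        plan = raw[0]["Plan"]
        cost = plan["Total Cost"]       # planner estimate, no units
        ms = plan["Actual Total Time"]  # measured, in milliseconds
        print("cost %10.1f  time %9.3f ms  cost/ms %8.1f"
              % (cost, ms, cost / max(ms, 0.001)))
    cur.close()
    conn.close()

If the cost/ms ratio stayed within an order of magnitude across plan
shapes (seq scans, index scans, sorts), that would suggest the costs
do track time; wild variation would support the no-fixed-ratio view.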

...Robert

