Re: detecting poor query plans - Mailing list pgsql-hackers

From Greg Stark
Subject Re: detecting poor query plans
Date
Msg-id 8765h6x3ia.fsf@stark.dyndns.tv
In response to Re: detecting poor query plans  (Neil Conway <neilc@samurai.com>)
Responses Re: detecting poor query plans  (Neil Conway <neilc@samurai.com>)
Re: detecting poor query plans  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Neil Conway <neilc@samurai.com> writes:

> I was thinking about this, but I couldn't think of how to get it to
> work properly:
> 
>      (1) The optimizer's cost metric is somewhat bogus to begin with.
>          ISTM that translating a cost of X into an expected runtime of
>          Y msecs is definitely not trivial to do.

At least across the possible plans of a given query at a specific point in
time, the intention is that cost be proportional to execution time.
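That proportionality claim can be sketched concretely: if the cost model is well calibrated, ranking candidate plans by estimated cost should match ranking them by measured runtime. The plan names and numbers below are invented for illustration, not taken from any real planner output.

```python
# Sketch: if cost is proportional to runtime, ordering plans by estimated
# cost should match ordering them by measured runtime. Data is invented.
def same_ordering(plans):
    """plans: list of (estimated_cost, actual_ms) pairs."""
    by_cost = sorted(plans, key=lambda p: p[0])
    by_time = sorted(plans, key=lambda p: p[1])
    return by_cost == by_time

candidate_plans = [
    (120.5, 14.2),   # hypothetical seq scan: cheapest estimate, fastest
    (340.0, 41.7),   # hypothetical index scan
    (980.0, 130.3),  # hypothetical nested loop
]
print(same_ordering(candidate_plans))  # True when the orderings agree
```

When the orderings disagree, the planner has picked (or would pick) a worse plan than necessary for that query, which is exactly the situation the thread is about detecting.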

> the exact time it takes to produce that result relation depends on a wide
> collection of external factors.

That's a valid point. The ms/cost factor may not be constant over time.
However, I think that in the normal case this number will tend toward a fairly
consistent value across queries and over time, though it will be influenced
somewhat by things like cache contention with other applications.
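One way to exploit that consistency would be to track the ms/cost ratio per query and flag queries whose ratio deviates sharply from the typical value, since those are the likely victims of misestimated plans. This is only a sketch of the idea, not anything PostgreSQL does; the threshold, query names, and numbers are all invented.

```python
from statistics import median

# Sketch: flag queries whose ms/cost ratio deviates strongly from the
# median ratio. The factor of 3 is an arbitrary illustrative threshold.
def flag_outliers(samples, factor=3.0):
    """samples: list of (query_id, estimated_cost, actual_ms)."""
    ratios = {qid: ms / cost for qid, cost, ms in samples}
    med = median(ratios.values())
    return [qid for qid, r in ratios.items()
            if r > factor * med or r < med / factor]

samples = [
    ("q1", 100.0, 10.0),  # ratio 0.10
    ("q2", 200.0, 22.0),  # ratio 0.11
    ("q3", 150.0, 90.0),  # ratio 0.60 -- far off, likely misestimated
]
print(flag_outliers(samples))  # ['q3']
```

A median rather than a mean keeps one badly misestimated query from dragging the baseline with it.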

On further thought, the real problem is that these numbers are only available
when running with "explain analyze" on. As was shown recently on one of the
lists, the cost of the repeated gettimeofday calls can be substantial, so it
isn't really feasible to run every query with that profiling enabled.
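The overhead concern is easy to demonstrate by timing the timer itself. The sketch below uses Python's time.perf_counter as a stand-in for gettimeofday; the absolute numbers depend entirely on the platform and say nothing precise about PostgreSQL's instrumentation.

```python
import time

def timer_overhead_ns(calls=100_000):
    """Estimate the average cost of one timer call, in nanoseconds."""
    start = time.perf_counter()
    for _ in range(calls):
        time.perf_counter()  # the instrumentation call being measured
    elapsed = time.perf_counter() - start
    return elapsed / calls * 1e9

# Per-node instrumentation pays this roughly twice (start + stop) per
# node visit, which is why profiling every query this way adds up.
print(f"~{timer_overhead_ns():.0f} ns per timer call")
```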

-- 
greg


