Re: detecting poor query plans - Mailing list pgsql-hackers

From Tom Lane
Subject Re: detecting poor query plans
Date
Msg-id 18130.1069884876@sss.pgh.pa.us
In response to Re: detecting poor query plans  (Greg Stark <gsstark@mit.edu>)
Responses Re: detecting poor query plans  (Gavin Sherry <swm@linuxworld.com.au>)
List pgsql-hackers
Greg Stark <gsstark@mit.edu> writes:
> That's a valid point. The ms/cost factor may not be constant over time.
> However I think in the normal case this number will tend towards a fairly
> consistent value across queries and over time. It will be influenced somewhat
> by things like cache contention with other applications though.

I think it would be interesting to collect the numbers over a long
period of time and try to learn something from the averages.  The real
hole in Neil's original suggestion was that it assumed that comparisons
based on just a single query would be meaningful enough to pester the
user about.

> On further thought the real problem is that these numbers are only available
> when running with "explain" on. As shown recently on one of the lists, the
> cost of the repeated gettimeofday calls can be substantial. It's not really
> feasible to suggest running all queries with that profiling.

Yeah.  You could imagine a simplified-stats mode that only collects the
total runtime (two gettimeofday's per query is nothing) and the row
counts (shouldn't be impossibly expensive either, especially if we
merged the needed fields into PlanState instead of requiring a
separately allocated node).  Not sure if that's as useful though.
        regards, tom lane

