On Sat, Dec 10, 2011 at 9:28 PM, Craig Ringer
<ringerc@ringerc.id.au> wrote:
> One thing I think would be interesting for this would be to identify slow queries (without doing detailed plan timing) and flag them for more detailed timing if they're run again within <x> time. I suspect this would only be practical with parameterised prepared statements where the query string remained the same, but that'd still be interesting - essentially automatically upgrading the log level for problem queries from slow query logging to auto_explain with explain analyse.
I'll suggest a different take. How about adding a feature where the system flags queries that take longer than the optimizer expects? The optimizer already comes up with a cost number for each candidate plan and uses it to rank them. If Postgres spends significantly (1) more than that expected cost when actually executing the query, that's a sign that something is wrong (statistics wrong, cost ratios out of whack, etc.).
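For reference, both numbers are already visible today if you're willing to pay for EXPLAIN (ANALYZE); a rough manual version of the check might look like this (the table and query are made up, purely for illustration):

    -- Hypothetical example query; "orders" is just a placeholder.
    EXPLAIN (ANALYZE, FORMAT JSON)
        SELECT * FROM orders WHERE customer_id = 42;

    -- The JSON plan carries both sides of the comparison:
    --   "Total Cost"        -> the planner's estimate, in abstract cost units
    --   "Actual Total Time" -> the measured runtime, in milliseconds
    -- Since the units differ, "significantly more" would have to be judged
    -- as a ratio against some per-installation calibration rather than a
    -- direct subtraction.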
I could see a future where Postgres either alerts a DBA to the issue or tries to take corrective action on its own (queue up a table or index for a statistics update). Maybe the next time a table is probed, Postgres could collect the distribution statistics as a side effect of the query. I'm not aware of any database engine that does that today.
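For context, the manual version of that statistics refresh exists already; a DBA alerted this way might run something like the following, with the table and column names again being placeholders:

    -- Refresh the distribution statistics for the suspect table by hand.
    ANALYZE VERBOSE orders;

    -- If one column's estimates keep going wrong, raise its sampling target
    -- and re-analyze just that column.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE orders (customer_id);

The interesting part of the proposal is having the system notice, on its own, that this needs doing.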
I wish I had the programming chops to take a swing at this....
(1) the definition of "significantly" is left as an exercise for the configuration file :)