Re: random_page_cost = 2.0 on Heroku Postgres - Mailing list pgsql-performance

From Peter Geoghegan
Subject Re: random_page_cost = 2.0 on Heroku Postgres
Date
Msg-id CAEYLb_WvD6gyibab7w=tCF4dQ7qD5AQjxGF348gZJM+r=oNhJQ@mail.gmail.com
In response to Re: random_page_cost = 2.0 on Heroku Postgres  (Peter van Hardenberg <pvh@pvh.ca>)
List pgsql-performance
On 12 February 2012 22:28, Peter van Hardenberg <pvh@pvh.ca> wrote:
> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN
> ANALYZE 0.1% of all queries that run on our platform we could look for
> bad choices by the planner. I think the potential here could be quite
> remarkable.

Tom Lane suggested that plans, rather than the query tree, might be a
more appropriate thing for the new pg_stat_statements to hash, since
plans are what can be directly blamed for execution costs. While I
don't think that's appropriate for normalisation (there would often be
duplicate pg_stat_statements entries per query), it does seem like an
idea that could be worked into a future revision to detect problematic
plans. Maybe it could be usefully combined with auto_explain or
something like that (in a revision of auto_explain that doesn't
necessarily explain every plan, and therefore doesn't pay the
considerable overhead of that instrumentation across the board).
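For illustration, the kind of sampling auto_explain described above
could look something like the following postgresql.conf sketch. Note
that auto_explain.sample_rate did not exist at the time of this thread
(it was added in a later PostgreSQL release, 9.6); this is a sketch of
the idea, not a configuration that would have worked in 2012:

```ini
# postgresql.conf sketch -- assumes auto_explain.sample_rate,
# which is a later addition (PostgreSQL 9.6+), not available when
# this thread was written
shared_preload_libraries = 'auto_explain'

auto_explain.log_min_duration = '250ms'  # only log plans of slow statements
auto_explain.sample_rate = 0.001         # explain roughly 0.1% of eligible statements
auto_explain.log_analyze = off           # skip per-node timing to avoid the
                                         # instrumentation overhead mentioned above
```

Keeping log_analyze off trades per-node timing detail for much lower
overhead, which is the tension the paragraph above is describing.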

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
