Re: Planning time in explain/explain analyze - Mailing list pgsql-hackers

From Stephen Frost
Subject Re: Planning time in explain/explain analyze
Msg-id 20140113200659.GR2686@tamriel.snowman.net
In response to Re: Planning time in explain/explain analyze  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Planning time in explain/explain analyze
List pgsql-hackers
* Robert Haas (robertmhaas@gmail.com) wrote:
> Currently the patch includes changes to prepare.c which is what seems
> odd to me.  I think it'd be fine to say, hey, I can't give you the
> planning time in this EXPLAIN ANALYZE because I just used a cached
> plan and did not re-plan.  But saying, hey, the planning time is
> $TINYVALUE, when what we really mean is that looking up the
> previously-cached plan took only that long, seems actively misleading
> to me.

My thought, at least, was to always grab the planning time and then
provide it for EXPLAIN and/or EXPLAIN ANALYZE; for re-plan cases,
indicate whether a cached plan was returned or a replan happened, and,
if a replan happened, what the old and new planning times were.
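
For illustration, something along these lines is what I have in mind
for the cached-plan case (the labels here are only a sketch of the
idea, not what any patch currently emits):

    =# EXPLAIN ANALYZE EXECUTE fetch_one(42);
                               QUERY PLAN
    ----------------------------------------------------------------
     Index Scan using t_pkey on t  (cost=0.29..8.30 rows=1 width=8)
       (actual time=0.020..0.021 rows=1 loops=1)
     Planning: cached plan reused (originally planned in 0.310 ms)
     Execution time: 0.045 ms

For a replan, both the old and the new planning times could be shown
on that same line.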

I don't think it makes any sense to report on the time returned from
pulling a previously-cached plan.

I understand that it's not completely free to track the plan time for
every query, but I'm in the camp that says "we need better metrics and
information for 99% of what we do", and I'd like to see us eventually
able to track average plan time (maybe on a per-query basis..), average
run-time, how many times we do a hashjoin or a mergejoin, the number of
records in/out of each, memory usage, etc, etc..  I don't think we need
per-tuple timing information.  I certainly wouldn't want to try to
collect all of this through shared memory or our existing stats
collector.
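
To make the per-query idea concrete, the sort of view I'm imagining
could be queried roughly like this (the view and column names are
purely hypothetical, in the spirit of a pg_stat_statements-style
extension):

    SELECT query, calls, avg_plan_time, avg_exec_time, hashjoin_count
    FROM pg_stat_plan_metrics
    ORDER BY avg_plan_time DESC
    LIMIT 5;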

Thanks,
    Stephen
