Re: What about utility to calculate planner cost constants? - Mailing list pgsql-performance

From Greg Stark
Subject Re: What about utility to calculate planner cost constants?
Date
Msg-id 87y8cfbqlf.fsf@stark.xeocode.com
In response to Re: What about utility to calculate planner cost constants?  (Richard Huxton <dev@archonet.com>)
Responses Re: What about utility to calculate planner cost constants?  (Richard Huxton <dev@archonet.com>)
List pgsql-performance
Richard Huxton <dev@archonet.com> writes:

> You'd only need to log them if they diverged from expected anyway. That should
> result in fairly low activity pretty quickly (or we're wasting our time).
> Should they go to the stats collector rather than logs?

I think you need to log them all. Otherwise, when you go to analyze the numbers
and come up with ideal values, you're going to be basing your optimization on a
skewed subset.
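The "come up with ideal values" step is essentially a regression: each logged
query contributes observed work counts and an actual runtime, and the cost
constants are the coefficients that best predict runtime from work. A minimal
sketch, assuming hypothetical logged totals of pages read and tuples processed
per query (the two-constant model and the sample numbers are illustrative, not
real PostgreSQL log output):

```python
# Hedged sketch: fit two hypothetical planner cost constants
# (a per-page cost and a per-tuple cost) from logged totals by
# ordinary least squares, solving the 2x2 normal equations directly.

def fit_cost_constants(samples):
    """samples: list of (pages_read, tuples_processed, actual_ms).
    Returns (page_cost, tuple_cost) minimizing squared runtime error."""
    spp = spt = stt = sp_b = st_b = 0.0
    for pages, tuples, ms in samples:
        spp += pages * pages      # sum of pages^2
        spt += pages * tuples     # cross term
        stt += tuples * tuples    # sum of tuples^2
        sp_b += pages * ms        # pages vs. observed time
        st_b += tuples * ms       # tuples vs. observed time
    det = spp * stt - spt * spt
    page_cost = (sp_b * stt - spt * st_b) / det
    tuple_cost = (spp * st_b - spt * sp_b) / det
    return page_cost, tuple_cost

# Synthetic data generated with page_cost=1.0, tuple_cost=0.01,
# so the fit should recover those values exactly:
samples = [(100, 1000, 110.0), (200, 500, 205.0), (50, 4000, 90.0)]
page_cost, tuple_cost = fit_cost_constants(samples)
```

With real data the fit would use more constants (random vs. sequential page
fetches, operator evaluations, etc.) and noisy timings, which is exactly why a
skewed subset of logged queries would bias the coefficients.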

I don't know whether the stats collector or the logs is better suited to this.

> > (Also, currently explain analyze has overhead that makes this impractical.
> > Ideally it could subtract out its overhead so the solutions would be accurate
> > enough to be useful)
>
> Don't we only need the top-level figures though? There's no need to record
> timings for each stage, just work completed.

I guess you only need top-level values. But you might also want some flag if
the row counts for any node were far off. In that case you would probably want
to discard the data point.
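That filtering step can be sketched simply: keep a data point only if the
planner's row estimate was within some factor of the actual row count. A
minimal sketch, assuming a hypothetical tolerance factor and illustrative
sample points (neither comes from the list discussion):

```python
# Hedged sketch: discard data points whose row estimates were far off,
# since their timings reflect a plan the planner mispredicted.

def rows_plausible(est_rows, actual_rows, factor=2.0):
    """True if the estimate is within a factor of `factor` of reality."""
    if est_rows == 0 or actual_rows == 0:
        return est_rows == actual_rows
    ratio = est_rows / actual_rows
    return 1.0 / factor <= ratio <= factor

# (est_rows, actual_rows, actual_ms); the second point's estimate is
# off by ~90x, so it would be discarded before fitting.
points = [(1000, 1050, 12.3), (1000, 90000, 840.0)]
kept = [p for p in points if rows_plausible(p[0], p[1])]
```

The choice of factor is a tuning knob: too tight and you discard most of the
data, too loose and badly misestimated plans skew the fit.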

--
greg
