On Sun, 4 Mar 2018 07:49:46 -0800
Jeff Janes <jeff.janes@gmail.com> wrote:
> In any event, I agree with your analysis that ANALYZE should set the
> number of tuples from scratch. After all, it sets the other estimates,
> such as the MCVs, from scratch, and those are much more sensitive to
> sampling error than the raw number of tuples is. But if the default
> target is set to 1, that would scan only 300 pages. I think that is a
> rather small sample size to base an estimate on, but it isn't clear to
> me that using 300 pages plus whacking them around with an exponential
> average is really going to be much better. And if you set your default
> target to 1, that is more-or-less what you signed up for.
>
> It is a little weird to have VACUUM update incrementally while ANALYZE
> computes from scratch and discards the previous value, but no weirder
> than what we currently do, which is to have ANALYZE update incrementally
> even though it is specifically designed to take a representative sample
> of the entire table. So I don't think we need to decide what to do about
> VACUUM before we can do something about ANALYZE.
Thanks. I was going to add the point about trusting ANALYZE for the
statistics but not for reltuples, but you beat me to it. A sample of 300 is
on the small side, but as you say, with a target of 1 that's what you asked
for. Even the old default target of 10 gives 3000 sample rows, which is
probably plenty.
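
For reference, IIRC ANALYZE's sample is 300 * statistics_target rows (the
constant comes from the Chaudhuri et al. bound cited in std_typanalyze()),
and it reads at most about that many pages: target 1 gives 300 rows,
target 10 gives 3000, and the current default of 100 gives 30000.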
I think the method VACUUM uses is appropriate and probably correct for
VACUUM, but not for ANALYZE. That distinction is actually hinted at in the
original comments, but not reflected in the code.
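
To make the difference concrete, here's a rough sketch of the two
strategies. This is simplified from what vac_estimate_reltuples() and
analyze.c actually do; the function names and the numbers are made up
purely for illustration:

#include <stdio.h>

typedef unsigned int BlockNumber;

/*
 * Incremental update, roughly the shape of vac_estimate_reltuples():
 * keep the old density for the pages we did not look at and blend in
 * the density observed on the scanned pages. Reasonable for VACUUM,
 * which scans a non-random subset of the table's pages.
 */
static double
estimate_incremental(double old_reltuples, BlockNumber old_pages,
                     BlockNumber total_pages,
                     BlockNumber scanned_pages, double scanned_tuples)
{
    double old_density = old_reltuples / old_pages;
    double new_density = scanned_tuples / scanned_pages;
    double unscanned = (double) (total_pages - scanned_pages);
    double density;

    /* weighted average; weights = fraction of the table each covers */
    density = (old_density * unscanned + new_density * scanned_pages)
              / total_pages;
    return density * total_pages;
}

/*
 * From scratch: extrapolate the density of a random sample over the
 * whole table, discarding the previous reltuples entirely. This is
 * what ANALYZE could do, since its sample is representative.
 */
static double
estimate_from_scratch(BlockNumber total_pages,
                      BlockNumber sampled_pages, double sampled_tuples)
{
    return sampled_tuples / (double) sampled_pages * (double) total_pages;
}

int
main(void)
{
    /* hypothetical bloated table: live tuples fell from 1000000 to
     * about 100000 while the page count stayed at 50000; a 300-page
     * sample found 600 live tuples (2 per page) */
    printf("incremental:  %.0f\n",
           estimate_incremental(1000000, 50000, 50000, 300, 600));
    printf("from scratch: %.0f\n",
           estimate_from_scratch(50000, 300, 600));
    return 0;
}

With these made-up numbers the incremental path barely moves the estimate
(about 994600), while the from-scratch path lands on the sampled density
(100000). That slow convergence is the difference at issue here.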
-dg
--
David Gould daveg@sonic.net
If simplicity worked, the world would be overrun with insects.