On Fri, Oct 26, 2012 at 5:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So the bottom line is that this is a case where you need a lot of
> resolution in the histogram. I'm not sure there's anything good
> we can do to avoid that. I spent a bit of time thinking about whether
> we could use n_distinct to get some idea of how many duplicates there
> might be for the endpoint value, but n_distinct is unreliable enough
> that I can't develop a lot of faith in such a thing. Or we could just
> arbitrarily assume some fraction-of-a-histogram-bin's worth of
> duplicates, but that would make the results worse for some people.
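
To have something concrete to point at below, here's a toy model of
the histogram interpolation, including a knob for the
fraction-of-a-bin idea. To be clear, none of this is selfuncs.c code:
hist[], est_gt(), and endpoint_dup_fudge() are made-up names, and the
equal-frequency-bin arithmetic is deliberately simplified.

#include <stdio.h>

#define NBOUNDS 5                     /* 4 equal-frequency bins */
static const double hist[NBOUNDS] = {0, 10, 20, 30, 40};

/* Fraction of rows strictly greater than c, by linear interpolation
 * within the bin containing c -- the usual histogram estimate. */
static double est_gt(double c)
{
    if (c < hist[0])
        return 1.0;
    if (c >= hist[NBOUNDS - 1])
        return 0.0;                   /* nothing beyond the endpoint */

    for (int i = 0; i < NBOUNDS - 1; i++)
    {
        if (c < hist[i + 1])
        {
            double binfrac = (c - hist[i]) / (hist[i + 1] - hist[i]);

            return 1.0 - (i + binfrac) / (NBOUNDS - 1);
        }
    }
    return 0.0;                       /* not reached */
}

/* The arbitrary-fudge option: credit the endpoint value with some
 * fraction k of one bin's worth of duplicates. */
static double endpoint_dup_fudge(double k)
{
    return k / (NBOUNDS - 1);
}
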
I looked at this a bit. It seems to me that the root of this issue is
that we aren't distinguishing between > and >= (at least, not as far
as I can see). ISTM that if the operator is >, we're doing exactly
the right thing, but if it's >=, we're giving exactly the same
estimate that we would give for >. That doesn't seem right.

Worse, I suspect that in this case we're actually giving a smaller
estimate for >= than we would for =, because = would estimate as if
we were searching for an arbitrary non-MCV value, while >= acts like > and
says, hey, there's nothing beyond the end.
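
Continuing the toy model above: the >= path just reuses est_gt(), so
at the histogram's upper bound it returns zero, below even an
eqsel-style non-MCV guess. (est_ge_naive() and est_eq_nonmcv() are
again made-up names, and 1/n_distinct is only a crude stand-in for
what eqsel really does.)

/* What we do today, in effect: >= takes the > path unchanged. */
static double est_ge_naive(double c)
{
    return est_gt(c);
}

/* Roughly what eqsel would guess for an arbitrary non-MCV value. */
static double est_eq_nonmcv(double n_distinct)
{
    return 1.0 / n_distinct;
}
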
Shouldn't there be a separate estimator, scalargesel, for >=? Or
should the existing scalargtsel be adjusted to handle the two cases
differently?
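
Either shape seems doable. Continuing the same sketch, option one is
a separate >= estimator layered on the > path; option two is a flag
on the shared code path. (scalargesel_toy(), est_ineq(), and the iseq
flag are all hypothetical, not proposals for actual names.)

/* Option 1: a separate estimator for >=. */
static double scalargesel_toy(double c, double n_distinct)
{
    return est_gt(c) + est_eq_nonmcv(n_distinct);
}

/* Option 2: teach the shared code path whether the operator admits
 * equality, and fold the = mass in there. */
static double est_ineq(double c, int iseq, double n_distinct)
{
    double sel = est_gt(c);

    if (iseq)
        sel += est_eq_nonmcv(n_distinct);
    return sel > 1.0 ? 1.0 : sel;     /* clamp */
}

int main(void)
{
    double nd = 100.0;                /* assumed n_distinct */

    printf("sel(x > 40)        = %f\n", est_gt(40.0));       /* 0.000000 */
    printf("naive sel(x >= 40) = %f\n", est_ge_naive(40.0)); /* 0.000000 */
    printf("eqsel-ish sel(=40) = %f\n", est_eq_nonmcv(nd));  /* 0.010000 */
    printf("half-bin fudge     = %f\n", endpoint_dup_fudge(0.5)); /* 0.125000 */
    printf("sel via option 1   = %f\n", scalargesel_toy(40.0, nd));
    printf("sel via option 2   = %f\n", est_ineq(40.0, 1, nd));
    return 0;
}

With those numbers, what we do today estimates x >= 40 as matching
nothing at all, even though eqsel would have said one row in a
hundred; either option brings >= back above =.
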
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company