On 2/5/2012 20:34, Tom Lane wrote:
> On reflection I think that the idea of clamping ndistinct beforehand is
> just wrong, and what we ought to do instead is apply a multiplier to the
> selectivity estimate afterwards. In the case of a base rel we could
> just multiply by the selectivity of its baserestrictinfo list. For join
> rels it's a bit harder to guess how much a given input relation might
> have been decimated, but if the join's estimated size is smaller than
> the output size of the base rel the correlation var came from, we could
> multiply by that ratio (on top of whatever correction came from the base
> rel's restriction clauses).
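To make sure I read this correctly, the correction would amount to something
like the sketch below (standalone arithmetic with made-up numbers and names
of my own, not the actual planner code):

#include <stdio.h>

/*
 * Apply the multipliers described above to a raw per-value selectivity,
 * e.g. 1/ndistinct taken from pg_statistic.
 */
static double
corrected_selectivity(double raw_sel,      /* raw selectivity estimate */
                      double restrict_sel, /* selectivity of the base rel's
                                            * baserestrictinfo list */
                      double joinrel_rows, /* estimated size of the join rel
                                            * the var comes from, or 0 for a
                                            * plain base rel */
                      double baserel_rows) /* output size of the base rel */
{
    double sel = raw_sel * restrict_sel;

    /*
     * For a join rel we can't tell how much the input was decimated, so if
     * the join output is smaller than the base rel's output, use that ratio
     * as an additional multiplier.
     */
    if (joinrel_rows > 0 && joinrel_rows < baserel_rows)
        sel *= joinrel_rows / baserel_rows;

    return sel;
}

int
main(void)
{
    double raw_sel = 1.0 / 100000;  /* ndistinct = 100000 */

    /* base rel whose restriction clauses keep 1% of its rows */
    printf("base rel: %g\n", corrected_selectivity(raw_sel, 0.01, 0, 1000));

    /* join rel of 100 rows on top of a base rel of 1000 rows */
    printf("join rel: %g\n", corrected_selectivity(raw_sel, 0.01, 100, 1000));
    return 0;
}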
I keep running into cases where (due to a tree of filters) the planner
underestimates the join size just because the raw ndistinct feeds a huge
number into the selectivity estimation formula, even though the sizes of
both input relations are estimated correctly and properly limited.
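As a toy illustration of what I see (numbers invented; the formula is the
no-MCV fallback of eqjoinsel()):

#include <stdio.h>

int
main(void)
{
    double nd1 = 1000000, nd2 = 1000000;  /* raw ndistinct of the join keys */
    double rows1 = 100, rows2 = 100;      /* heavily filtered inputs whose
                                           * sizes are estimated correctly */
    double sel = 1.0 / (nd1 > nd2 ? nd1 : nd2);

    /*
     * 100 * 100 * 1e-06 = 0.01 rows (clamped to 1 by the planner), although
     * the filters may well have kept matching keys on both sides.
     */
    printf("join rows: %g\n", rows1 * rows2 * sel);
    return 0;
}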
I've tried to understand the reasoning behind this through commits
0d3b231eebf, 97930cf578e and 7f3eba30c9d, but it is still not clear to me.
So why is the idea of clamping ndistinct so bad in general? Could you
explain your reasons in a bit more detail?
--
regards,
Andrey Lepikhov
Postgres Professional