Tom Lane wrote:
>
> Good question. I haven't looked at the literature at all, but a first
> thought is that you might be able to do something useful given the
> bounding box of all data in the table ... which is a stat that VACUUM
> does *not* compute, but perhaps could be taught to.
>
> regards, tom lane
i'm still trying to understand how cost estimates & selectivity work.
i'm looking at a page in the programmer's guide and trying to figure
out how it works for a normal (btree) index versus how it would (or
should) work for an rtree. first i have to figure out how it works for
a normal index. it looks like there are two parts: the index cost
(both the index startup cost and the cost to access a tuple) and the
selectivity (which is multiplied by the access cost to get the total
cost of using that index?). it makes sense that the real cost would
best be estimated this way, at least in my simplistic world. if i've
been twiddling with the selectivity, shouldn't i also be messing with
the index costs? wouldn't that probably lower the cost more
effectively?
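
to make sure i'm reading it right, this is roughly the arithmetic i
have in my head (made-up names, not the actual planner code):

/* sketch only: the startup cost is paid once, and the selectivity
 * scales the per-tuple work.  twiddling either factor changes the
 * total that gets compared against a sequential scan. */
double
guess_index_scan_cost(double startup_cost, double per_tuple_cost,
                      double table_tuples, double selectivity)
{
    double tuples_fetched = selectivity * table_tuples;

    return startup_cost + tuples_fetched * per_tuple_cost;
}

so a smaller selectivity shrinks the per-tuple term but does nothing
about the startup term, which is why i was wondering whether the index
costs need adjusting too.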
logically, i would say you're right about the bounding box being a
decent judge of selectivity. in theory you should be able to see the
data distribution by looking at the rtree index itself, which would
give an even better selectivity number. i'm still not sure about the
cost side, though. would that be something worth looking into?
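
for concreteness, here's a rough sketch of what i imagine the
bounding-box approach could look like (made-up types and names, and it
assumes the data are spread uniformly over the table's bounding box):

/* sketch only: estimate the selectivity of an overlap query as the
 * fraction of the table's bounding box covered by the query box,
 * assuming a uniform data distribution.  none of these names exist
 * in the backend; they're just for illustration. */
typedef struct
{
    double xlo, ylo, xhi, yhi;
} Box;

double
box_area(Box b)
{
    return (b.xhi - b.xlo) * (b.yhi - b.ylo);
}

double
bbox_selectivity(Box query, Box table_bbox)
{
    Box    overlap;
    double total = box_area(table_bbox);

    /* clip the query box to the table's bounding box */
    overlap.xlo = (query.xlo > table_bbox.xlo) ? query.xlo : table_bbox.xlo;
    overlap.ylo = (query.ylo > table_bbox.ylo) ? query.ylo : table_bbox.ylo;
    overlap.xhi = (query.xhi < table_bbox.xhi) ? query.xhi : table_bbox.xhi;
    overlap.yhi = (query.yhi < table_bbox.yhi) ? query.yhi : table_bbox.yhi;

    if (total <= 0.0 ||
        overlap.xhi <= overlap.xlo || overlap.yhi <= overlap.ylo)
        return 0.0;

    return box_area(overlap) / total;
}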
jeff