Tom Lane wrote:
> One idea I thought about was to sort by index scan cost, using
> selectivity only as a tiebreaker for cost, rather than the other way
> around as is currently done. This seems fairly plausible because
> indexscans that are cheaper than other indexscans likely return fewer
> rows too, and so selectivity is already accounted for to some extent ---
> at least you can't have an enormously worse selectivity at lower cost,
> whereas Steve's example proves it doesn't work the other way. But I'm
> worried about breaking the reasoning about redundant indexes that's
> mentioned in the comments.
>
> Another alternative that would respond to the immediate problem is to
> maintain the current sort order, but as we come to each index, consider
> using that one alone, and throw away whatever AND we might have built up
> if that one alone beats the AND-so-far. This seems more conservative,
> as it's unlikely to break any cases that work well now, but on the other
> hand it feels like plastering another wart atop a structure that's
> already rather rickety.
>
> Has anyone got any thoughts about the best way to do this?
How about doing both: sort the indexes by index scan cost; then pick
the first index on the list, and add further indexes only when they
lower the total cost.  When considering each new index, also compare
it by itself against the AND already stacked up.  If the index alone
is cheaper, put it at the top of the list and restart the algorithm
(after the sorting step, of course).
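
To make that a bit more concrete, here is a rough standalone sketch of
what I have in mind -- not planner code at all; the IndexCand struct,
the and_cost() estimator and the numbers in main() are all made up for
illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct IndexCand
{
	const char *name;
	double		scancost;		/* cost of scanning this index alone */
	double		selectivity;	/* fraction of rows its conditions pass */
} IndexCand;

static int
cmp_scancost(const void *a, const void *b)
{
	const IndexCand *x = (const IndexCand *) a;
	const IndexCand *y = (const IndexCand *) b;

	return (x->scancost > y->scancost) - (x->scancost < y->scancost);
}

/* crude stand-in for the real AND cost: member scan costs plus a heap
 * visit charge proportional to the product of the selectivities */
static double
and_cost(const IndexCand *set, int n, double heap_rows, double row_cost)
{
	double		total = 0.0,
				sel = 1.0;
	int			i;

	for (i = 0; i < n; i++)
	{
		total += set[i].scancost;
		sel *= set[i].selectivity;
	}
	return total + sel * heap_rows * row_cost;
}

static double
choose_and(IndexCand *cands, int ncands, double heap_rows, double row_cost)
{
	IndexCand  *chosen = malloc(ncands * sizeof(IndexCand));
	double		best;
	int			nchosen,
				i;

	/* step 1: sort the candidates by raw index scan cost */
	qsort(cands, ncands, sizeof(IndexCand), cmp_scancost);

restart:
	/* step 2: start from the index at the head of the (current) list */
	chosen[0] = cands[0];
	nchosen = 1;
	best = and_cost(chosen, 1, heap_rows, row_cost);

	for (i = 1; i < ncands; i++)
	{
		double		alone = and_cost(&cands[i], 1, heap_rows, row_cost);
		double		with;

		if (alone < best)
		{
			/* this index alone beats the AND so far: move it to the top
			 * and redo the accumulation; each promotion strictly lowers
			 * the head's stand-alone cost, so this terminates */
			IndexCand	tmp = cands[i];

			memmove(&cands[1], &cands[0], i * sizeof(IndexCand));
			cands[0] = tmp;
			goto restart;
		}
		/* step 3: AND the index in only if that lowers the total cost */
		chosen[nchosen] = cands[i];
		with = and_cost(chosen, nchosen + 1, heap_rows, row_cost);
		if (with < best)
		{
			best = with;
			nchosen++;
		}
	}

	for (i = 0; i < nchosen; i++)
		printf("keep %s\n", chosen[i].name);
	free(chosen);
	return best;
}

int
main(void)
{
	IndexCand	cands[] = {
		{"idx_cheap", 10.0, 0.50},
		{"idx_mid", 25.0, 0.05},
		{"idx_costly", 40.0, 0.01},
	};

	printf("total cost: %.1f\n", choose_and(cands, 3, 100000.0, 0.01));
	return 0;
}

With those made-up numbers, idx_cheap starts the AND, idx_mid alone
turns out cheaper and gets promoted to the top, and idx_costly is then
rejected because adding it raises the total cost.
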
I think the concern about condition redundancy should be attacked
separately.  How about just comparing whether the indexes share a
common prefix of conditions?  I admit I don't understand what would
happen with indexes defined like (lower(A), B, C) versus (A, B), for
example.
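
Just to spell out the simple case of what I mean, treating each
index's conditions as an ordered list of opaque strings (which
obviously dodges the expression-index question above) -- a toy
illustration, not a proposal for the actual representation:

#include <stdbool.h>
#include <string.h>

/*
 * True if every condition of "shorter" is also a leading condition of
 * "longer", in order; under this naive model the shorter index then
 * enforces nothing the longer one doesn't already, so ANDing it in
 * would be redundant.
 */
static bool
conditions_are_prefix(const char **shorter, int nshort,
					  const char **longer, int nlong)
{
	int			i;

	if (nshort > nlong)
		return false;
	for (i = 0; i < nshort; i++)
	{
		if (strcmp(shorter[i], longer[i]) != 0)
			return false;
	}
	return true;
}

So {"a = 1"} against {"a = 1", "b = 2"} would be flagged as redundant;
what the right answer is for the (lower(A), B, C) versus (A, B) case I
don't know.
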
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support