Might be worth trying a larger statistics target (say 100), in the hope
that the planner then has better information to work with.
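For example, something along these lines (column name "word" is a guess; substitute whichever column the = clause actually uses):

```sql
-- Raise the per-column statistics target, then re-ANALYZE so the
-- planner picks up the richer histogram for its row estimates.
ALTER TABLE ndict8 ALTER COLUMN word SET STATISTICS 100;
ANALYZE ndict8;
```

With better estimates the planner may decide on its own to apply the = condition first and run the LIKE over the smaller result set.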
best wishes
Mark
Marc G. Fournier wrote:
>
>The problem is that right now, we look at the LIKE first, giving us ~300k
>rows, and then search through those for those who have the word matching
>... is there some way of reducing the priority of the LIKE part of the
>query, as far as the planner is concerned, so that it will "resolve" the =
>first, and then work the LIKE on the resultant set, instead of the other
>way around? So that the query is only checking 15k records for the 13k
>that match, instead of searching through 300k?
>
>I'm guessing that the reason the LIKE is taking precedence is
>because the URL table has fewer rows in it than ndict8?
>