>> so we need to optimize the cost model for such cases; the method is
>> the patch I mentioned above.
> Making the planner more robust w.r.t. estimation errors is nice, but
> I wouldn't go as far as saying we should optimize for such cases. The
> stats can be arbitrarily off, so should we expect the error to be 10%,
> 100% or 1000000%?
I don't think my patch relies on anything like that. My patch doesn't fix
the statistics issue; it just adds extra cost to the quals in the Index
Filter part.
Assume the query pattern is WHERE col1 = X AND col2 = Y. The impacts are:
1). It makes the cost of index (col1, other_column) higher than that of
index (col1, col2).
2). It changes the relationship between a seq scan and an index scan on
index (col1, other_column) (this is something I don't want). However, the
cost difference between an index scan and a seq scan is usually very
large, so the change should have nearly no impact on that choice (see the
worked numbers below).
3). It makes the cost of an index scan on index (col1) alone higher.
Overall I think nothing will get worse.
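
To put made-up numbers on impact 2) (purely illustrative, not from any
real plan): suppose the seq scan costs 10000 and the index scan on
(col1, other_column) costs 100, with one qual (col2 = Y) left in the
Index Filter. A flat surcharge of 0.0001 per filter qual per fetched
tuple, over say 10000 fetched tuples, adds only 1 unit, moving the index
scan to 101. That is nowhere near flipping the seqscan/indexscan
decision; it only breaks ties between index paths that previously looked
identical.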
> We'd probably end up with plans that handle worst cases well,
> but the average performance would end up being way worse :-(
That's possible; that's why I hope to get some feedback on it. Actually
I can't think of such a case. Do you have anything like that in mind?
----
I feel that (qpqual_cost.per_tuple * 1.001) is not good enough, since a
user may have something like WHERE expensive_func(col1) = X, and then the
0.1% surcharge is no longer a tiny tie-breaker. We may change it to:

cpu_tuple_cost + qpqual_cost.per_tuple + 0.0001 * list_length(qpquals)
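
For clarity, here is a minimal sketch of how that could look in the
qual-costing step of cost_index() (hand-written for this mail, not the
actual patch; qpquals is the list of quals that end up as the Index
Filter, as in costsize.c):

    cost_qual_eval(&qpqual_cost, qpquals, root);

    startup_cost += qpqual_cost.startup;

    /*
     * Charge a small flat amount per Index Filter qual, independent of
     * the qual's own evaluation cost, so that an index which can use a
     * qual as an index condition is preferred over one that must
     * re-check it as a filter.
     */
    cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple +
                    0.0001 * list_length(qpquals);

    run_cost += cpu_per_tuple * tuples_fetched;

Unlike the multiplicative 1.001 variant, the flat per-qual constant keeps
the penalty the same no matter how expensive the individual quals are.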