I have a query where postgres (7.2.1) seriously overestimates the cost of using an index.
When I do a "set enable_seqscan = false;", the plan goes from:
->  Aggregate  (cost=49656.10..49656.10 rows=1 width=12)
  ->  Merge Join  (cost=49062.25..49655.18 rows=367 width=12)
    ->  Sort  (cost=11794.87..11794.87 rows=15220 width=6)
      ->  Seq Scan on u  (cost=0.00..10737.55 rows=15220 width=6)
    ->  Sort  (cost=37267.38..37267.38 rows=136643 width=6)
      ->  Seq Scan on d  (cost=0.00..24391.43 rows=136643 width=6)
to:
->  Nested Loop  (cost=0.00..102204.91 rows=367 width=12)
  ->  Index Scan using u_pkey_key on u  (cost=0.00..43167.33 rows=15220 width=6)
  ->  Index Scan using d_pkey on d  (cost=0.00..3.86 rows=1 width=6)
The first plan actually takes three times as long to run as the second. Since postgres
thinks the nested loop is so expensive, do I have to lower cpu_operator_cost to get it
to choose the nested loop on its own?
And does 7.3 have any improvements in this area?