I have postgresql 8.4.15 on Ubuntu 10.04 and this query:
SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
FROM ALARM_EVENT eventTable
INNER JOIN ALARM_EVENT_PROBE probeTable
  ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
WHERE probeTable.PROBE_ID = 2
which is running slower than it could. Table definitions and explain analyze output below. The first explain is the current plan (uses sequential scans). The second is after I have disabled sequential scans, and is the plan I would prefer.
I have vacuum analyzed both tables. In terms of relevant changes to the default postgresql.conf, we have these:
shared_buffers = 28MB
constraint_exclusion = on
I want to understand why the optimiser is choosing the plan with sequential table scans, rather than the plan with index scans. I am not sure how to interpret the predicted vs actual times/costs, and want to understand why the predicted cost for the index scan plan seems to be way off.
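For reference, this is roughly how I compared the two plans (a session-local sketch; `enable_seqscan` only discourages sequential scans for this session, it doesn't forbid them):

```sql
-- Current plan (sequential scans)
EXPLAIN ANALYZE
SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
FROM ALARM_EVENT eventTable
INNER JOIN ALARM_EVENT_PROBE probeTable
  ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
WHERE probeTable.PROBE_ID = 2;

-- Preferred plan (index scans)
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
FROM ALARM_EVENT eventTable
INNER JOIN ALARM_EVENT_PROBE probeTable
  ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
WHERE probeTable.PROBE_ID = 2;

RESET enable_seqscan;
```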
The planner clamps the estimated number of rows from an index scan at 1 row, even if it actually believes the number will be 0. That makes the logic simpler, avoiding the need to test for division by zero all over the place, and probably makes it more robust to mis-estimation in most use cases. But in this case, it means the planner thinks it will find 34 rows, one from each partition, which is way too high.
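You can see the clamp in the plan itself (assuming ALARM_EVENT_PROBE is the partitioned table here): run a plain EXPLAIN with sequential scans disabled and look at the per-child estimates. Each child index scan will report an estimated `rows=1` even when that partition contains no matching rows, so the estimates sum to roughly one row per partition:

```sql
-- With ~34 child tables, expect ~34 estimated rows in total, one per
-- child index scan, regardless of how many partitions actually match.
SET enable_seqscan = off;
EXPLAIN
SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
FROM ALARM_EVENT eventTable
INNER JOIN ALARM_EVENT_PROBE probeTable
  ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
WHERE probeTable.PROBE_ID = 2;
RESET enable_seqscan;
```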
Now, there certainly is some cost to probing an index and finding that no rows in it can match. But your query is probably probing the same spot in each index for each negative match, which means all the blocks involved are already in memory. PostgreSQL doesn't know that, so even if it didn't apply the clamp it would probably still not get the right answer.
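Since the planner assumes index probes may hit disk, one thing worth trying (a sketch, not a guaranteed fix; the values below are assumptions you'd tune for your own hardware) is to tell it that random page reads are mostly cached, which makes the index-scan plan look cheaper:

```sql
-- Session-local experiment with the planner's cost constants.
SET random_page_cost = 2.0;          -- default is 4.0; lower if data is mostly cached
SET effective_cache_size = '256MB';  -- assumption: size to your OS file cache

EXPLAIN ANALYZE
SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
FROM ALARM_EVENT eventTable
INNER JOIN ALARM_EVENT_PROBE probeTable
  ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
WHERE probeTable.PROBE_ID = 2;
```

If the planner then picks the index-scan plan and it is genuinely faster, you could consider setting these in postgresql.conf, but test on your real workload first since they affect every query.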