PG seems to be choosing a sub-optimal query plan: it's doing a
sequential scan of a ~126,000-tuple table instead of an index scan,
even though only 16 rows match. This is on PG 7.0.2:
fastfacts=> vacuum analyze event;
VACUUM
fastfacts=> explain select type from event where type = 'IPOETC_EVENT';
NOTICE: QUERY PLAN:
Seq Scan on event (cost=0.00..6664.25 rows=6224 width=12)
EXPLAIN
fastfacts=> select count(*) from event where type = 'IPOETC_EVENT';
count
-------
16
(1 row)
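(For reference, the planner's notion of the table size lives in
pg_class; I haven't included the output here, but a quick sanity check
would be something along these lines, to see whether relpages and
reltuples look sane after the vacuum analyze:

fastfacts=> select relname, relpages, reltuples from pg_class where relname = 'event';
)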
fastfacts=> \d event_type_key
Index "event_type_key"
Attribute | Type
-----------+------
type | text
btree
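Out of curiosity, I could also force the planner's hand to see what it
thinks the index scan would cost. Assuming the enable_seqscan setting
behaves as expected under 7.0.2, the sketch would be roughly:

fastfacts=> set enable_seqscan to off;
fastfacts=> explain select type from event where type = 'IPOETC_EVENT';
fastfacts=> set enable_seqscan to on;

(I haven't run that yet, so no output to show.)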
fastfacts=> select count(*) from event;
count
--------
126580
(1 row)
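If my arithmetic is right, the planner's estimate works out to
6224/126580, or roughly 4.9% of the table, while the real selectivity
is 16/126580, about 0.013%, so the row estimate is off by a factor of
nearly 400.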
I know that PG is frequently smarter than I am, but this doesn't seem
like a case where it's made a good decision. What am I missing?
Chris
--
----------------------------------------------------- chris@mt.sri.com
Chris Jones SRI International, Inc.