I have a database with a lot of records (~6mil, iirc) and a varchar
column that is often queried with something like "where acc like
'foo%'". There is a B-Tree index on the acc column. In 7.2.3, Postgres
would use that index for these queries and things were lightning fast. In
7.3 it refuses to use the index, even if I set enable_seqscan = off, so a
query that used to take a few msec now takes a few aeons. I've run
vacuum analyze on the whole database, and it doesn't change anything.
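For reference, the setup is roughly this (the table name below is made
up, but acc is the real column):

create index xacc on records (acc);          -- B-Tree index on the varchar column
select * from records where acc like 'foo%'; -- the kind of query that got slow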
I'm in the middle of clustering that table on the index (since that's the
only way the table is ever queried), so I can't run an explain analyze
against it right now, but here's one for another table with the same problem:
Index "public.xfoo"
Column | Type
-------------+------------------------
stringthing | character varying(255)
btree, for table "public.foo"
xxx=> explain analyze select * from foo where stringthing like 'ABCDEF%';
Seq Scan on foo  (cost=0.00..148503.29 rows=1 width=111) (actual time=30512.99..32082.95 rows=4 loops=1)
  Filter: (stringthing ~~ 'ABCDEF%'::text)
Total runtime: 32083.07 msec
For reference, there are 4,688,317 rows in this table. Changing the select
* to select stringthing doesn't affect the query plan either.
I can coerce it to do an index scan by making the condition "stringthing >=
'ABCDEF' and stringthing < 'ABCDEG'", in which case it executes nice and
fast:
xxx=> explain analyze select * from foo where stringthing >= 'ABCDEF' and
stringthing < 'ABCDEG';
Index Scan using xfoo on foo  (cost=0.00..6.02 rows=1 width=111) (actual time=0.08..0.08 rows=0 loops=1)
  Index Cond: ((stringthing >= 'ABCDEF'::character varying) AND (stringthing < 'ABCDEG'::character varying))
Total runtime: 0.17 msec
This is an ugly workaround, though :(
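In case it helps anyone generalize the rewrite: the upper bound is just
the prefix with its last character incremented. A quick sketch, assuming
the prefix is plain ASCII and its last character has a successor:

-- compute the range upper bound for an arbitrary prefix by bumping
-- its last character; for 'ABCDEF' this yields 'ABCDEG'
select substring(p from 1 for length(p) - 1)
       || chr(ascii(substring(p from length(p))) + 1) as upper_bound
  from (select 'ABCDEF'::text as p) s;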
Something I noticed while trying to force an index scan: setting
enable_seqscan = off here doesn't stop it from using a seq scan, but it
does change the cost estimate to '100000000.00..100148503.29'; bit
weird, that, if you ask me.
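i.e., roughly this, with the plan line reassembled from the numbers above:

xxx=> set enable_seqscan = off;
xxx=> explain select * from foo where stringthing like 'ABCDEF%';
Seq Scan on foo  (cost=100000000.00..100148503.29 rows=1 width=111)
  Filter: (stringthing ~~ 'ABCDEF%'::text)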
-Matt