I'm not sure how well this works in Postgres. For Informix, we could trick
the optimizer into using an index with something like:

  select colname from table where colname != NULL;

Explicitly referencing the indexed column in the WHERE clause was the trick.
In ESQL/C, return parameters then told you how many rows had been found.
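In Postgres the != NULL form probably won't help: a comparison against NULL
is never true (the result is unknown), so the equivalent test there is
colname IS NOT NULL, and the planner doesn't need the column mentioned that
way to consider an index. As a diagnostic, a minimal sketch of what I'd try
instead, borrowing the state_tst / f_state names from the message quoted
below, is to discourage sequential scans for the session and re-run EXPLAIN:

  SET enable_seqscan TO off;   -- discourages (doesn't forbid) seq scans
  EXPLAIN SELECT count(*) FROM state_tst WHERE f_state = 'PA';
  SET enable_seqscan TO on;    -- restore the default afterwards

If the plan still shows a sequential scan with that setting, the index can't
be used for this query at all; if it switches to an index scan but is no
faster, the planner's original choice was probably reasonable.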
>CREATE INDEX st_v_state_idx ON state_tst USING btree (v_state);
>CREATE INDEX st_f_state_idx ON state_tst USING btree (f_state);
>
>Load the table using a copy from ...
>
>vacuum verbose analyze state_tst;
>
>Total rows: 14309241
>
>Queries filtering on either f_state = or v_state = are explained (and
>appear to execute) as a sequential scan, resulting in 60 - 80 second
>query times.
>
>Can I force the use of an index? Or do I have something wrong? Any
>ideas?
>
>pg_test=# explain select count(*) from state_tst where f_state = 'PA';
>NOTICE: QUERY PLAN:
>
>Aggregate (cost=277899.65..277899.65 rows=1 width=0)
>  -> Seq Scan on state_tst (cost=0.00..277550.51 rows=139654 width=0)
>
>EXPLAIN
>