Hi.
I've got a slow query I can't seem to fix...
Table = logins
+-----------+-----------+--------+
| Field     | Type      | Length |
+-----------+-----------+--------+
| loginid   | varchar() |     16 |
| logintime | datetime  |      8 |
| ip        | varchar() |     15 |
| direction | char()    |      1 |
+-----------+-----------+--------+
Indices: logins_direction_idx logins_logintime_idx
So I've got this table that records information about people logging in. Since
it's essentially a log, it is very large: about 1.3 million tuples.
I've created an index on logintime in the hope that the queries which only
care about the last 40 days' worth of logins will use the index and ignore
the rest of the tuples...
explain select * from logins
    where logintime > 'now'::datetime - '40 days'::timespan;
NOTICE: QUERY PLAN:
Seq Scan on logins (cost=5839.78 rows=44958 width=44)
Very bad query plan :(
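One thing I haven't tried yet: if the problem is just that the planner doesn't
reduce 'now'::datetime - '40 days'::timespan to a plain constant at planning
time, then substituting a hard-coded cutoff should show whether the index can
be used at all (the date below is only an illustrative placeholder):

explain select * from logins
    where logintime > '2000-09-01 00:00:00'::datetime;

If that plan uses logins_logintime_idx, then the non-constant expression would
seem to be the culprit rather than the index itself.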
As I recall, a btree traversal should be able to return all of its values in
order. So isn't it reasonable to expect the DBMS to use the index to find all
the values within a given range?
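For testing, I believe the planner can also be nudged away from sequential
scans; assuming this version of PostgreSQL supports the enable_seqscan
setting, something along these lines would show whether an index scan is even
being considered:

set enable_seqscan to 'off';
explain select * from logins
    where logintime > 'now'::datetime - '40 days'::timespan;
set enable_seqscan to 'on';

(That's just a sketch; I haven't verified the setting is available here.)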
-Michael