On 10/4/07, Ben <bench@silentmedia.com> wrote:
> If I have this:
>
> create table foo (bar int primary key);
>
> ...then in my ideal world, Postgres would be able to use that index on bar
> to help me with this:
>
> select bar from foo order by bar desc limit 20;
>
> But in my experience, PG8.2 is doing a full table scan on foo, then
> sorting it, then doing the limit. I have a more complex primary key, but I
> was hoping the same concept would still apply. Am I doing something wrong,
> or just expecting something that doesn't exist?
pg uses a cost-based planner. It looks at the table -- the number of
rows, the distribution of values -- and decides whether a seq scan or
an index scan will be cheaper. Do you have any evidence that the seq
scan is a bad choice in your case?
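To see what it actually decided for your query, explain will show you
the plan it picked and the estimated cost (this is just your example
table from above, nothing fancy):

=# explain select bar from foo order by bar desc limit 20;

If you see a Seq Scan with a Sort on top, that's what the planner
thinks is cheapest; if you see an Index Scan Backward, it's already
using the index on bar.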
try this experiment:
psql mydb
=# select * from foo; -- this will prime the db and put the table in
memory if it will fit
=# \timing
=# set enable_seqscan=off;
=# select bar from foo order by bar desc limit 20;
=# set enable_seqscan=on;
=# select bar from foo order by bar desc limit 20;
and compare the times each takes. Run each one a few times to be sure
you're not just seeing random variance.
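If you want to confirm which plan each run actually used, put explain
analyze in front of the query. With enable_seqscan off you should see
something along these lines (costs and timings elided; foo_pkey is
just the default name postgres gives your primary key's index):

=# explain analyze select bar from foo order by bar desc limit 20;
 Limit  (...)
   ->  Index Scan Backward using foo_pkey on foo  (...)

With it back on, you'll get whatever the planner thinks is cheaper --
possibly a Limit over a Sort over a Seq Scan on foo.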
On my reporting db, on a table with somewhere around 75 million rows,
a similar query takes 0.894 ms and uses an index scan. Which is good,
because a sequential scan on that table takes about 15 to 30 minutes.