I would say that the optimiser has calculated that it is quicker to read sequentially through the table to fetch the 354 rows returned without the limit. With the limit, it is much faster to pick out the first 10 rows using the index. An index is usually only quicker when a small percentage of the table (around 5%) is going to be returned; once more than about 5-8% of the rows are needed, the optimiser will generally choose a sequential scan.
Now I run the same query, and it goes for an *index-only scan* only after I add a *limit*:
```
localdb=# explain analyse verbose select uid from mm where uid>100 order by uid;
                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=30.99..31.87 rows=354 width=8) (actual time=0.243..0.294 rows=354 loops=1)
   Output: uid
   Sort Key: mm.uid
   Sort Method: quicksort  Memory: 41kB
   ->  Seq Scan on public.mm  (cost=0.00..16.00 rows=354 width=8) (actual time=0.010..0.123 rows=354 loops=1)
 Execution time: 0.342 ms
(10 rows)
```
```
localdb=# explain analyse verbose select uid from mm where uid>100 order by uid limit 10;
                                                                    QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.27..2.13 rows=10 width=8) (actual time=0.026..0.037 rows=10 loops=1)
   Output: uid
   ->  Index Only Scan using mm_pkey on public.mm  (cost=0.27..65.91 rows=354 width=8) (actual time=0.025..0.034 rows=10 loops=1)
         Output: uid
         Index Cond: (mm.uid > 100)
         Heap Fetches: 10
 Planning time: 0.096 ms
 Execution time: 0.059 ms
(8 rows)
```
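If you want to confirm that this is a cost-based choice rather than the planner being unable to use the index, you can temporarily discourage sequential scans for your session and compare the estimated costs. A quick sketch, assuming the same `mm` table and a psql session:

```sql
-- Discourage sequential scans for this session only
-- (a planner preference, not a hard ban)
SET enable_seqscan = off;

-- Re-run the unlimited query; the plan now shows the index scan's
-- estimated cost, which you can compare against the Seq Scan plan above
EXPLAIN ANALYSE VERBOSE SELECT uid FROM mm WHERE uid > 100 ORDER BY uid;

-- Restore the default behaviour
RESET enable_seqscan;
```

If the index plan's total cost comes out higher than the Seq Scan plan's, that is exactly why the planner picked the sequential scan when all 354 rows were needed.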