On Tue, Nov 29, 2011 at 11:21 AM, Tyler Hains
<thains@profitpointinc.com> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by
> card_id limit 1;
>                                  QUERY PLAN
> ---------------------------------------------------------------------------
>  Limit  (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948 rows=1 loops=1)
>    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38 rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
There's a huge disconnect here between what the query planner expects
(27k rows) and how many it actually gets (1). Also, fetching a single
row via an index should be much faster than this, even if the table and
index are quite large. Have you checked that index for bloat?
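One quick way to look, assuming you're on a version with the pgstattuple
extension available (the index name cards_pkey is taken from the plan
above):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT avg_leaf_density, leaf_fragmentation
      FROM pgstatindex('cards_pkey');
    -- a low avg_leaf_density or high leaf_fragmentation suggests bloat;
    -- REINDEX INDEX cards_pkey; would rebuild it (note: it locks the table)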
---------------------------------------------------------------------
There are actually more like 27 million rows in the table. That's why it
really should filter the rows using the index on card_set_id before
ordering for the limit.
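If that's the goal, a two-column index would let the planner do exactly
that. Just a sketch (the index name is made up, and I'm assuming
card_set_id and card_id are the columns from the query above):

    CREATE INDEX cards_set_card_idx ON cards (card_set_id, card_id);
    -- the planner can then scan only the card_set_id = 2850 entries,
    -- which are already ordered by card_id, and stop after the first row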
The documentation does not seem to give a clear reason for changing the
value of default_statistics_target, or when you would override it per
column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS. My gut is
telling me that this may be our answer if we can figure out how to tweak
it.
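For what it's worth, the per-column form is spelled like this (the
target of 1000 is just an illustration, not a recommendation):

    ALTER TABLE cards ALTER COLUMN card_set_id SET STATISTICS 1000;
    ANALYZE cards;  -- the new target only takes effect after a fresh ANALYZE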