Re: Query Optimizer makes a poor choice - Mailing list pgsql-general

From Marcin Mańk
Subject Re: Query Optimizer makes a poor choice
Date
Msg-id CAK61fk6a6NF9pW4tO0_UJORNGfLf13hHPaFgD71S5XsA5Pa4gw@mail.gmail.com
In response to Query Optimizer makes a poor choice  ("Tyler Hains" <thains@profitpointinc.com>)
List pgsql-general
On Tue, Nov 29, 2011 at 7:21 PM, Tyler Hains <thains@profitpointinc.com> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by
> card_id limit 1;
>                                                                QUERY PLAN
>
-----------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948
> rows=1 loops=1)
>    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38
> rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
>          Filter: (card_set_id = 2850)
>  Total runtime: 6026.985 ms
> (4 rows)
>

I believe this is the old problem of the planner expecting the
card_set_id values to be randomly distributed over the card_ids. That
is not the case here, I guess?
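One way to check this guess (a sketch, reusing the table and column names from the query above) is to see where the matching rows actually fall in card_id order:

```sql
-- If min(card_id) for this set is far into the overall card_id range,
-- the index scan on cards_pkey has to wade through most of the index
-- before it finds the first matching row.
SELECT min(card_id), max(card_id), count(*)
FROM cards
WHERE card_set_id = 2850;
```

Compare the minimum against the table-wide card_id range: the 6-second runtime suggests the first match sits a long way in.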

The planner expects to quickly hit a matching record while scanning
the primary key, and there is a nasty surprise.

It seems there is no perfect solution; things you might want to try:
- fiddling with random_page_cost/seq_page_cost/work_mem
- "ORDER BY card_id-1"
- an index on (card_set_id, card_id)
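The last two suggestions can be sketched as follows (the index name is made up for illustration):

```sql
-- The "card_id-1" trick: applying an expression to the ORDER BY column
-- prevents the planner from using the cards_pkey scan to produce the
-- ordering, so it falls back to fetching the matching rows first.
SELECT * FROM cards
WHERE card_set_id = 2850
ORDER BY card_id - 1
LIMIT 1;

-- A composite index satisfies both the filter and the ordering with a
-- single short index scan, which is usually the cleanest fix.
CREATE INDEX cards_set_card_idx ON cards (card_set_id, card_id);
```

The composite index makes the trick unnecessary: the planner can then scan just the (card_set_id = 2850) portion of the index in card_id order and stop after one row.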

Greetings
Marcin
