Re: Use of Functional Indexes and Planner estimates - Mailing list pgsql-performance

From Greg Stark
Subject Re: Use of Functional Indexes and Planner estimates
Msg-id 87n03cvi1e.fsf@stark.xeocode.com
In response to Re: Use of Functional Indexes and Planner estimates  ("Scott Marlowe" <smarlowe@qwest.net>)
List pgsql-performance
"Scott Marlowe" <smarlowe@qwest.net> writes:

> >    ->  Seq Scan on rules
> >          (cost=0.00..22296.32 rows=11294 width=12)
> >          (actual time=540.149..2047.308 rows=1 loops=1)


> Simple, the planner is choosing a sequential scan when it should be
> choosing an index scan.  This is usually because random_page_cost is set
> too high, at the default of 4.  Try settings between 1.2 and 2.x or so
> to see how that helps.  Be sure and test with various queries of your
> own to be sure you've got about the right setting.
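
For reference, you can experiment with random_page_cost per-session before
touching postgresql.conf; 1.5 below is just an arbitrary trial value:

SET random_page_cost = 1.5;   -- affects only this session
-- re-run the problem query under EXPLAIN ANALYZE and compare plans
RESET random_page_cost;       -- restore the configured value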

Unless you make random_page_cost about .0004 (4/11294) it isn't going to cost
this query right. (That's a joke, don't do it :) The real problem is that the
planner thinks there are 11,294 records matching the where clause when in fact
there is only 1.
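
You can see just how far off that estimate is with a quick count. The lower()
condition below is only a stand-in for whatever functional where clause the
query actually uses:

-- hypothetical predicate standing in for the real functional where clause
select count(*) from rules where lower(rulename) = 'some-rule';
-- compare the result (1) against the rows=11294 estimate above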

If you know an upper bound on how many records the query should be finding,
you might try a kludge involving putting a LIMIT inside the group by, i.e.
something like

select rulename,redirect
  from (select rulename,redirect
          from ...
         where ...
         limit 100) as kludge
 group by rulename,redirect

This would at least tell the planner not to expect more than 100 rows and to
take the plan likely to produce the first 100 rows fastest.
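
If you want to check that the kludge took, EXPLAIN should show the subquery's
row estimate capped at 100. With hypothetical names filled in for the elided
parts it would look something like:

explain
select rulename,redirect
  from (select rulename,redirect
          from rules
         where lower(rulename) = 'some-rule'  -- hypothetical predicate
         limit 100) as kludge
 group by rulename,redirect;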

But this has the disadvantage of uglifying your code and introducing an
arbitrary limit. When 7.5 comes out you'll want to rip this out.

--
greg
