On 5/7/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andrew Sullivan <ajs@crankycanuck.ca> writes:
> On Sun, May 06, 2007 at 01:45:54PM -0500, Aaron Bono wrote:
>> So my conclusion is that the function is being treated as volatile even
>> though it is stable because the number of records is small.
> I don't think that's the issue. If this is dependent on the
> number of records, then for some reason the way the data is
> structured means that the planner thinks a seqscan's a better bet.
> This is probably due to distribution of the values. You could try
> increasing the stats sample, and see if that helps.
It's got nothing to do with distribution, just with the number of pages to
fetch. You'll nearly always get a seqscan plan if there are only a
couple of pages in the table, simply because it would take more I/O to
read the index too.
The reason this is a problem in this example is that the function is so
expensive to execute. The planner should be avoiding the seqscan on the
basis of CPU cost not I/O cost, but it doesn't know that the function is
expensive enough to drive the decision that way.
In CVS HEAD (8.3-to-be) we've added a "cost" property to functions,
which provides a clean way to fix this issue, but there's no good way to
deal with it in existing releases :-(
regards, tom lane
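For reference, the 8.3 cost property can be attached to an existing function; a minimal sketch, where the function name, signature, and cost figure are placeholders:

    -- mark the function as expensive so per-row CPU cost, not I/O,
    -- drives the planner away from the seqscan
    ALTER FUNCTION my_expensive_func(integer) COST 10000;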
Since we don't delete records, only deactivate them, I added 100 dummy records that are not active. This fixed the problem: with more pages in the table, the planner now picks the index scan. As the number of real records in that table grows, I will delete the dummy records.
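Something along these lines (a sketch; the table name, columns, and values are placeholders):

    -- pad the table with inactive rows so it spans enough pages
    -- that the planner stops preferring a seqscan
    INSERT INTO my_table (name, active)
    SELECT 'dummy_' || g, false
    FROM generate_series(1, 100) AS g;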
Thanks for all the help!
Aaron
--
==================================================================
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com  http://codeelixir.com
==================================================================