Re: Is this a planner bug? - Mailing list pgsql-general

From: Albe Laurenz
Subject: Re: Is this a planner bug?
Msg-id: A737B7A37273E048B164557ADEF4A58B17CF2B4B@ntex2010i.host.magwien.gv.at
In response to: Re: Is this a planner bug?  (Torsten Förtsch <torsten.foertsch@gmx.net>)
Responses: Re: Is this a planner bug?  (Torsten Förtsch <torsten.foertsch@gmx.net>)
List: pgsql-general
> Torsten Förtsch wrote:
>>> I got this plan:
>>>
>>> Limit  (cost=0.00..1.12 rows=1 width=0)
>>>    ->  Seq Scan on fmb  (cost=0.00..6964734.35 rows=6237993 width=0)
>>>          Filter: ...
>>>
>>> The table has ~80,000,000 rows. So, the filter, according to the plan,
>>> filters out >90% of the rows. Although the cost for the first row to
>>> come out of the seqscan might be 0, the cost for the first row to pass
>>> the filter and, hence, to hit the limit node is probably higher.

>> What is your effective_cache_size in postgresql.conf?
>>
>> What are random_page_cost and seq_page_cost?

> 8GB, 4, 1

Could you run EXPLAIN ANALYZE for the query with enable_seqscan on and off?
I'd be curious
a) whether the index can be used,
b) if it can be used, whether that is actually cheaper, and
c) how the planner's estimates compare with reality.
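
Something along these lines, as a sketch (the SELECT list, WHERE clause and
LIMIT are placeholders; I'm only guessing the query shape from the plan you
posted):

  -- heavily penalize sequential scans, for this session only
  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT ... FROM fmb WHERE ... LIMIT 1;

  -- back to the default, for comparison
  SET enable_seqscan = on;
  EXPLAIN ANALYZE SELECT ... FROM fmb WHERE ... LIMIT 1;

If the first EXPLAIN ANALYZE still shows a sequential scan, the planner found
no usable index for that filter at all; enable_seqscan = off does not forbid
sequential scans, it only makes them look very expensive.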

Yours,
Laurenz Albe
