> Torsten Förtsch wrote:
>>> I got this plan:
>>>
>>> Limit (cost=0.00..1.12 rows=1 width=0)
>>> -> Seq Scan on fmb (cost=0.00..6964734.35 rows=6237993 width=0)
>>> Filter: ...
>>>
>>> The table has ~80,000,000 rows. So, the filter, according to the plan,
>>> filters out >90% of the rows. Although the cost for the first row to
>>> come out of the seqscan might be 0, the cost for the first row to pass
>>> the filter and, hence, to hit the limit node is probably higher.
>> What is your effective_cache_size in postgresql.conf?
>>
>> What is random_page_cost and seq_page_cost?
> 8GB, 4, 1
Could you run EXPLAIN ANALYZE for the query with enable_seqscan on and off?
I'd be curious
a) whether the index can be used,
b) if it can be used, whether that is actually cheaper, and
c) how the planner's estimates compare with reality.
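For reference, the experiment could look roughly like this in a psql session (a sketch only; the column list and filter condition are elided as in your plan, and the LIMIT 1 matches the Limit node shown):

```sql
-- With the default planner settings (sequential scan expected):
EXPLAIN ANALYZE SELECT ... FROM fmb WHERE ... LIMIT 1;

-- Discourage sequential scans for this session only:
SET enable_seqscan = off;

-- Rerun; if a usable index exists, the planner should now pick it:
EXPLAIN ANALYZE SELECT ... FROM fmb WHERE ... LIMIT 1;

-- Restore the default:
RESET enable_seqscan;
```

Comparing the "actual time" and row counts in the two outputs against the estimated costs should answer all three questions.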
Yours,
Laurenz Albe