On 29 May 2002, Wei Weng wrote:
> One question is: It appears to be using Sequential scan on tables no
> matter how big it is. I tried to set enable_seqscan='false' and then
> vacuum analyze, but the result is even worse.
>
> Isn't index scan supposed to be faster/better than sequential scan for
> large tables? One table (contentsummaries) has 11000 entries and another
Not necessarily. If the rows are spread across random pages (i.e., there's no
real correlation between the index key value and the row's position in the
table), then past a certain point you end up reading most of the table's pages
anyway, plus the index on top of that. And the smaller the individual rows,
the more likely this is to occur, I'd guess, since each page holds more of
them and even a small fraction of the rows ends up touching most of the pages.
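If you're on 7.2 or later, one rough way to check whether the planner is right
is to compare the two plans for a single query in one session, instead of
forcing it globally (the column and WHERE clause here are made up, adjust to
your schema):

    -- discourage seq scans for this session only
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM contentsummaries WHERE some_column = 42;
    -- put it back and compare
    SET enable_seqscan = on;
    EXPLAIN ANALYZE SELECT * FROM contentsummaries WHERE some_column = 42;

If the actual times in the two EXPLAIN ANALYZE outputs are close, or the index
plan is actually slower, the planner's preference for the seq scan is probably
right.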
You might also want to try other join orders. Since you're using explicit JOIN
syntax, postgres takes the order in which you write the joins as the order in
which the joins should be done. And are you sure you need the DISTINCT?
That's probably what's producing the sort and unique steps in the plan (a
sketch of both points follows).
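Something like this, where the table and column names are purely illustrative
and the join graph is just a chain:

    -- as written: contentsummaries is joined to big_table first,
    -- then that result to small_table
    SELECT DISTINCT cs.id
      FROM contentsummaries cs
      JOIN big_table    b ON (b.cs_id = cs.id)
      JOIN small_table  s ON (s.b_id = b.id);

    -- hand-reordered: start from the smaller / more selective end
    SELECT cs.id
      FROM small_table  s
      JOIN big_table    b ON (b.id = s.b_id)
      JOIN contentsummaries cs ON (cs.id = b.cs_id);

The DISTINCT is dropped in the second version only on the assumption that the
join keys can't produce duplicate rows; if they can, you still need it, but
then the sort/unique step is the price you pay.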
Beyond that, what are your postgresql.conf settings for shared memory
(shared_buffers) and sort memory (sort_mem)?
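If I remember right, the 7.x defaults (64 buffers and 512 kB of sort_mem) are
very conservative and can make large sorts and joins look much worse than they
need to. The numbers below are only an example of the kind of thing to try,
not a recommendation for your particular box:

    # postgresql.conf -- example values only
    shared_buffers = 4096    # 8 kB pages, so roughly 32 MB of shared memory
    sort_mem = 8192          # kB each sort may use before spilling to disk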