Re: Not using index - Mailing list pgsql-general

From Bas Scheffers
Subject Re: Not using index
Date
Msg-id 2826.217.205.40.94.1076607087.squirrel@io.scheffers.net
In response to Re: Not using index  ("scott.marlowe" <scott.marlowe@ihs.com>)
List pgsql-general
scott.marlowe said:
> Yes.  drop cpu_index_tuple_cost by a factor of 100 or so
No effect.

> Also up effective_cache_size.  It's measured in 8k blocks, so for a
That's better. Set to 9000, which seems reasonable for my current setup, it
starts using the index once random_page_cost <= 1.5.
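For reference, both settings can be changed per session without touching
postgresql.conf; the values below are just the ones from my setup, sketched
for illustration:

```sql
-- effective_cache_size is measured in 8 kB blocks, so 9000 is roughly 70 MB.
SET effective_cache_size = 9000;
-- Lowering random_page_cost toward 1 tells the planner random I/O is cheap,
-- which favours index scans.
SET random_page_cost = 1.5;
```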

> Note that rather than "set enable_seqscan=off" for the whole database, you
> can always set it for just this session / query.
Considering how rare a case it is that a table scan is more efficient than
using proper indexes, that might not be a bad idea.
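A session-scoped override along these lines would avoid disabling seqscans
globally (the SELECT here is just a placeholder for the affected query):

```sql
SET enable_seqscan = off;   -- affects only this session/connection
-- ... run the problem query here ...
RESET enable_seqscan;       -- restore the default afterwards
```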

> When you run explain analyze <query> are any of the estimates of rows way
> off versus the real number of rows?  If so, you may need to analyze more
They are, actually; depending on what stage it is in, it is expecting a
factor of 20 to 100 more rows than are actually returned. That sounds way
off to me.

Here's what's happening: first there is the index scan, which would return
about 5000 rows (the planner expects 3700). But it doesn't return that
many, as another filter (circle ~ point) reduces the actual number of rows
to 242. That number is then further reduced to 32 by a tsearch2 query, but
the planner is still expecting 3700 rows at that stage.
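The mismatch shows up directly in EXPLAIN ANALYZE output; table and column
names below are hypothetical, but the pattern to look for is the gap between
the estimated and actual row counts on each plan node:

```sql
EXPLAIN ANALYZE
SELECT * FROM places WHERE area ~ point '(1.0,2.0)';
-- In the output, compare the two rows= figures per node, e.g.:
--   Index Scan using places_area_idx on places
--     (cost=... rows=3700 ...) (actual time=... rows=242 ...)
-- An estimate more than ~10x off the actual count means the planner is
-- costing the rest of the plan from bad numbers.
```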

I tried upping the statistics target for the columns I am searching on and
running ANALYZE on the table, but it made no difference.
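For completeness, what I mean by upping the statistics is the following
(table and column names hypothetical):

```sql
-- Raise the per-column statistics target from the default,
-- then re-gather statistics so the planner sees the new histogram.
ALTER TABLE places ALTER COLUMN area SET STATISTICS 100;
ANALYZE places;
```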

So I guess I am stuck with setting the effective_cache_size to a sane
value and lowering the random_page_cost value to something not much higher
than 1. Hey, as long as it works!

Thanks,
Bas.
