On Mon, Jan 09, 2006 at 09:23:38PM -0700, Robert Creager wrote:
> I'm working with a query to get more info out with a join. The base
> query works great speed wise because of index usage. When the join is
> tossed in, the index is no longer used, so the query performance tanks.
The first query you posted returns 285 rows and the second returns
over one million; index usage aside, that difference surely accounts
for a performance penalty. And as is often pointed out, index scans
aren't always faster than sequential scans: the more of a table a
query has to fetch, the more likely a sequential scan will be faster.
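To see that tradeoff for yourself, you can compare plans for a narrow
and a wide version of the same predicate. This is just an illustrative
sketch against your windspeed table; it assumes an expression index on
unmunge_time(time_group) exists, which may not be the case:

```sql
-- A narrow predicate (few matching rows) tends to favor an index scan:
EXPLAIN ANALYZE
SELECT * FROM windspeed
WHERE unmunge_time(time_group) > now() - interval '1 hour';

-- A wide predicate fetches most of the table, so a sequential scan
-- usually wins despite the index:
EXPLAIN ANALYZE
SELECT * FROM windspeed
WHERE unmunge_time(time_group) > now() - interval '30 days';
```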
Have the tables been vacuumed and analyzed? The planner's estimates
for windspeed are pretty far off, which could be affecting the choice
of plan:
> -> Sort (cost=12997.68..13157.98 rows=64120 width=28) (actual time=2286.155..2286.450 rows=284 loops=1)
> Sort Key: date_part('doy'::text, unmunge_time(windspeed.time_group))
> -> Seq Scan on windspeed (cost=0.00..7878.18 rows=64120 width=28) (actual time=2279.275..2285.271
> Filter: (unmunge_time(time_group) > (now() - '24:00:00'::interval))
That's a small amount of the total query time, however, so although
an index scan might help, it probably won't provide the big gain
you're looking for.
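If the statistics are stale, refreshing them is cheap and may fix the
estimate by itself (the plan above estimates 64120 rows but only 284
come back). A minimal sketch:

```sql
-- Reclaim dead tuples and regenerate planner statistics for the table:
VACUUM ANALYZE windspeed;
```

Note that if the misestimate comes from the unmunge_time() expression
rather than the column itself, plain column statistics may not help
much.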
Have you done any tests with enable_seqscan disabled? That'll show
whether an index or bitmap scan would be faster. And have you
verified that the join condition is correct? Should the query be
returning over a million rows?
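For the enable_seqscan test, something along these lines in a psql
session should do it; the SELECT list here is only a placeholder for
your actual join query, which I haven't seen in full:

```sql
-- Disable sequential scans for this session only; the planner will
-- fall back to index or bitmap scans where any are usable:
SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT * FROM windspeed
WHERE unmunge_time(time_group) > now() - interval '24 hours';

-- Restore the default so later queries plan normally:
RESET enable_seqscan;
```

If the plan with sequential scans disabled is slower, the planner was
making the right call in the first place.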