Re: Full table scan: 300 million rows - Mailing list pgsql-novice

From Andreas Kretschmer
Subject Re: Full table scan: 300 million rows
Date
Msg-id 20100515072227.GA8052@tux
In response to Full table scan: 300 million rows  (David Jarvis <thangalin@gmail.com>)
List pgsql-novice
David Jarvis <thangalin@gmail.com> wrote:

> Hi,
>
> I have the following query:
>
> Select  avg(d.amount) AS amount,  y.year
> From year_ref y
>     Join month_ref m
>         On m.year_ref_id = y.id
>     Join daily d
>         On d.month_ref_id = m.id
> Where y.year Between 1980 And 2000
>     And m.month = 12
>     And m.category_id = '001'
>     And d.daily_flag_id <> 'M'
>     And exists   (

I think you have a bad table design: you have split a date into
year, month and day, and stored the parts in different tables.

If I were you, I would use a regular date field and expression indexes like:

test=# create table d (d date);
CREATE TABLE
test=*# create index idx_d_year on d(extract (year from d));
CREATE INDEX
test=*# create index idx_d_month on d(extract (month from d));
CREATE INDEX

Your query with this structure:

select ... from table where
  extract(year from d) between 1980 And 2000
  and extract(month from d) = 12
  and daily_flag_id ...

This can use the indexes and avoid the seq-scan. You can also use
table partitioning and constraint exclusion (for instance, one table per
month).
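
To sketch the partitioning idea on the same example table: in current
releases this is done with inheritance plus CHECK constraints. The child
table name and date range below are only illustrative:

test=# create table d_2000_12 (
           check (d >= '2000-12-01' and d < '2001-01-01')
       ) inherits (d);
CREATE TABLE
test=# set constraint_exclusion = on;
SET

With constraint_exclusion enabled, the planner can skip any child table
whose CHECK constraint cannot overlap the WHERE clause, so a query on
December 2000 touches only d_2000_12.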


> http://i.imgur.com/m6YIV.png
>
> I have yet to let this query finish.
>
> Any ideas how I can speed it up?

Do you have an index on daily.daily_flag_id ?
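(Note that a plain b-tree index on a flag column rarely helps with a
<> predicate. One option is a partial index on the join column,
restricted to the rows the query actually reads -- column names taken
from your query above, so adjust as needed:

test=# create index idx_daily_not_m on daily (month_ref_id)
       where daily_flag_id <> 'M';
CREATE INDEX

The planner can use this index only for queries that repeat the
"daily_flag_id <> 'M'" condition.)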


Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.                              (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°
