Re: Increase Query Speed - Mailing list pgsql-general

From: Alban Hertroys
Subject: Re: Increase Query Speed
Date:
Msg-id: F2740559-086D-47CE-8962-66A69A720F8B@solfertje.student.utwente.nl
In response to: Re: Increase Query Speed (Jamie Kahgee <jamie.kahgee@gmail.com>)
List: pgsql-general
On 27 Jul 2010, at 21:48, Jamie Kahgee wrote:

> EXPLAIN ANALYZE SELECT page, count(page) as impressions FROM campaign_impressions WHERE campaign = 42 and "timestamp" BETWEEN '2010-05-21 00:00:00' AND '2010-07-27 00:00:00' group by page order by impressions;
>                                           QUERY PLAN
> ---------------------------------------------------------------------------------------------------
>  Sort  (cost=106059.36..106059.40 rows=16 width=4) (actual time=2209.808..2209.816 rows=109 loops=1)
>    Sort Key: (count(page))
>    Sort Method:  quicksort  Memory: 30kB
>    ->  HashAggregate  (cost=106058.84..106059.04 rows=16 width=4) (actual time=2209.749..2209.765 rows=109 loops=1)

Looks fine up to here.

>          ->  Bitmap Heap Scan on campaign_impressions  (cost=19372.78..102534.06 rows=704956 width=4) (actual time=424.023..1980.987 rows=1010896 loops=1)
>                Recheck Cond: (campaign = 42)
>                Filter: (("timestamp" >= '2010-05-21 00:00:00'::timestamp without time zone) AND ("timestamp" <= '2010-07-27 00:00:00'::timestamp without time zone))

Here's your problem. There are about a million rows matching these criteria.

Now, a million rows of width 4 in 2 seconds is (if I interpret row width correctly) about 2MB/s: 1,010,896 rows x 4 bytes is roughly 4MB, fetched in about 2 seconds. So that's possibly not topping your I/O subsystem. That probably means those rows are scattered all over the table file, which means Postgres needs to fetch them through random disk I/O.

It would probably help to cluster that table on the campaign_impressions_timestamp_idx index. At least most of the rows will then be in chronological order, so disk I/O would be much more efficient (if it isn't already in that order, of course!).
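
Something like the following should do that (just a sketch; note that CLUSTER rewrites the table under an exclusive lock, so on a table this size it needs a quiet moment, and it's a one-time reordering that you'd have to repeat as new rows arrive):

-- Rewrite the table in timestamp order (takes an ACCESS EXCLUSIVE lock).
-- On PostgreSQL versions before 8.3 the syntax is:
--   CLUSTER campaign_impressions_timestamp_idx ON campaign_impressions;
CLUSTER campaign_impressions USING campaign_impressions_timestamp_idx;

-- Refresh planner statistics for the rewritten table.
ANALYZE campaign_impressions;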

Another possibility would be to create a summary table that sums up the count of pages by day, so that you would only need to query for the sum of relatively few records. The summary table can then be kept up to date by triggers or something - that's up to you.
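
Here's a minimal sketch of that idea. The table and trigger names are made up, and I'm assuming campaign and page are integers - adjust to your actual schema. Also note the naive update-then-insert can race when two concurrent transactions insert the first impression for the same (campaign, page, day), so in practice you'd want locking or retry logic around it:

-- Hypothetical summary table, one row per campaign/page/day.
CREATE TABLE campaign_impressions_daily (
    campaign    integer NOT NULL,
    page        integer NOT NULL,
    day         date    NOT NULL,
    impressions bigint  NOT NULL DEFAULT 0,
    PRIMARY KEY (campaign, page, day)
);

CREATE OR REPLACE FUNCTION campaign_impressions_count() RETURNS trigger AS $$
BEGIN
    -- Bump today's counter for this campaign/page; create the row
    -- if it doesn't exist yet.
    UPDATE campaign_impressions_daily
       SET impressions = impressions + 1
     WHERE campaign = NEW.campaign
       AND page     = NEW.page
       AND day      = NEW."timestamp"::date;
    IF NOT FOUND THEN
        INSERT INTO campaign_impressions_daily (campaign, page, day, impressions)
        VALUES (NEW.campaign, NEW.page, NEW."timestamp"::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER campaign_impressions_summarise
    AFTER INSERT ON campaign_impressions
    FOR EACH ROW EXECUTE PROCEDURE campaign_impressions_count();

Your query then only has to aggregate a hundred or so rows per day of the range:

SELECT page, sum(impressions) AS impressions
  FROM campaign_impressions_daily
 WHERE campaign = 42
   AND day BETWEEN '2010-05-21' AND '2010-07-27'
 GROUP BY page
 ORDER BY impressions;

(Day-level granularity treats the upper end of the range slightly differently than your timestamp BETWEEN did, so double-check the boundary behaviour.)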

>                ->  Bitmap Index Scan on campaign_impressions_campaign_idx  (cost=0.00..19196.54 rows=1039330 width=0) (actual time=421.587..421.587 rows=1044475 loops=1)
>                      Index Cond: (campaign = 42)

Nothing wrong here either.

>  Total runtime: 2209.869 ms
> (10 rows)

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.




