Re: Slow first query despite LIMIT and OFFSET clause - Mailing list pgsql-general

From Alban Hertroys
Subject Re: Slow first query despite LIMIT and OFFSET clause
Date
Msg-id 984DA324-F956-498D-94B9-B2DA33F1C5A4@solfertje.student.utwente.nl
In response to Slow first query despite LIMIT and OFFSET clause  (Phoenix Kiula <phoenix.kiula@gmail.com>)
Responses Re: Slow first query despite LIMIT and OFFSET clause  (Phoenix Kiula <phoenix.kiula@gmail.com>)
List pgsql-general
On Jan 26, 2009, at 4:41 AM, Phoenix Kiula wrote:

> Appreciate any thoughts.
>
> My query is:
>
>
> explain analyze SELECT
>            testimonials.url
>            ,testimonials.alias
>            ,testimonials.aliasEntered
>            ,testimonials.title
>            ,testimonials.modify_date
>            ,testimonials.id
>            ,visitcount.visit_count
>            ,visitcount.unique_count
>            ,visitcount.modify_date
>            ,coalesce(  extract(epoch from now()) -  extract(epoch
> from visitcount.modify_date), 0)
>            ,(select count(id) from testimonials WHERE
> testimonials.user_id = 'superman' and testimonials.user_known = 1 and
> testimonials.status = 'Y' ) AS total
>    FROM testimonials
>    LEFT JOIN visitcount ON testimonials.id = visitcount.id
>    WHERE
>             testimonials.user_id = 'superman'
>         and testimonials.user_known = 1
>         and testimonials.status = 'Y'
>    ORDER BY testimonials.modify_date desc
>    OFFSET 0 LIMIT 10
> ;
>
>
>
>
>                                                     QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------------------------
> Limit  (cost=224.68..224.71 rows=10 width=187) (actual
> time=453.429..453.539 rows=10 loops=1)
>   InitPlan
>     ->  Aggregate  (cost=63.52..63.53 rows=1 width=8) (actual
> time=89.268..89.271 rows=1 loops=1)
>           ->  Index Scan using new_idx_userknown on testimonials
> (cost=0.00..63.41 rows=42 width=8) (actual time=0.039..49.968
> rows=10149 loops=1)
>                 Index Cond: ((user_id)::text = 'superman'::text)
>                 Filter: (status = 'Y'::bpchar)
>   ->  Sort  (cost=161.16..161.26 rows=42 width=187) (actual
> time=453.420..453.464 rows=10 loops=1)
>         Sort Key: testimonials.modify_date
>         ->  Nested Loop Left Join  (cost=0.00..160.02 rows=42
> width=187) (actual time=89.384..395.008 rows=10149 loops=1)
>               ->  Index Scan using new_idx_userknown on testimonials
> (cost=0.00..63.41 rows=42 width=171) (actual time=0.061..50.990
> rows=10149 loops=1)
>                     Index Cond: ((user_id)::text = 'superman'::text)
>                     Filter: (status = 'Y'::bpchar)
>               ->  Index Scan using visitcount_pkey1 on visitcount
> (cost=0.00..2.28 rows=1 width=24) (actual time=0.007..0.010 rows=1
> loops=10149)
>                     Index Cond: (testimonials.id = visitcount.id)
> Total runtime: 461.682 ms
> (15 rows)


Does that query plan look any better without the select count(id) from
testimonials subquery?
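
For comparison, that would be the same query with the inline count
removed, something like this (the total could then be fetched with a
separate, simpler query):

explain analyze SELECT
           testimonials.url
           ,testimonials.alias
           ,testimonials.aliasEntered
           ,testimonials.title
           ,testimonials.modify_date
           ,testimonials.id
           ,visitcount.visit_count
           ,visitcount.unique_count
           ,visitcount.modify_date
           ,coalesce(extract(epoch from now()) - extract(epoch from visitcount.modify_date), 0)
   FROM testimonials
   LEFT JOIN visitcount ON testimonials.id = visitcount.id
   WHERE
            testimonials.user_id = 'superman'
        and testimonials.user_known = 1
        and testimonials.status = 'Y'
   ORDER BY testimonials.modify_date desc
   OFFSET 0 LIMIT 10;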

If so, you may be better off keeping track of those counts in a
separate table maintained by triggers on the testimonials table.
Whether that really helps depends on how variable the selection
criteria for those counts are. If the counts are generally very low,
the benefit will probably be minimal.
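
A rough sketch of that approach, assuming a counts table keyed on the
same three selectors (the table, function and trigger names here are
made up for illustration, and the column types should be adjusted to
match your actual testimonials definition):

CREATE TABLE testimonial_counts (
    user_id    text    NOT NULL,
    user_known integer NOT NULL,
    status     char(1) NOT NULL,
    total      bigint  NOT NULL DEFAULT 0,
    PRIMARY KEY (user_id, user_known, status)
);

CREATE OR REPLACE FUNCTION testimonials_count_trgfn() RETURNS trigger AS $$
BEGIN
    -- decrement the count for the old key on UPDATE/DELETE
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE testimonial_counts SET total = total - 1
         WHERE user_id = OLD.user_id
           AND user_known = OLD.user_known
           AND status = OLD.status;
    END IF;
    -- increment (or create) the count for the new key on INSERT/UPDATE
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        UPDATE testimonial_counts SET total = total + 1
         WHERE user_id = NEW.user_id
           AND user_known = NEW.user_known
           AND status = NEW.status;
        IF NOT FOUND THEN
            INSERT INTO testimonial_counts (user_id, user_known, status, total)
            VALUES (NEW.user_id, NEW.user_known, NEW.status, 1);
        END IF;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testimonials_count_trg
    AFTER INSERT OR UPDATE OR DELETE ON testimonials
    FOR EACH ROW EXECUTE PROCEDURE testimonials_count_trgfn();

The query would then read the total from testimonial_counts instead of
running the count(id) subselect. Note that the insert-if-missing step
is not safe against two sessions inserting a brand-new key at the same
time; either pre-populate the counts table or add locking if that can
happen in your workload.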

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

