Re: quickly getting the top N rows - Mailing list pgsql-performance

From Simon Riggs
Subject Re: quickly getting the top N rows
Date
Msg-id 1191529385.4223.231.camel@ebony.site
In response to Re: quickly getting the top N rows  (Ben <bench@silentmedia.com>)
Responses Re: quickly getting the top N rows
List pgsql-performance
On Thu, 2007-10-04 at 12:52 -0700, Ben wrote:

> The original, slow query:
>
> explain analyze SELECT * FROM log WHERE clientkey in
> ('000000004500000000010000000001')  AND premiseskey in
> ('000000004500000000010000000001') and logicaldel = 'N'
> ORDER BY logtime desc, logkey desc, clientkey desc, premiseskey desc LIMIT 20 offset 0;
>
> QUERY PLAN
>
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   Limit  (cost=356402.58..356402.63 rows=20 width=563) (actual time=215858.481..215858.527 rows=20 loops=1)
>     ->  Sort  (cost=356402.58..357598.25 rows=478267 width=563) (actual time=215858.478..215858.498 rows=20 loops=1)
>           Sort Key: logtime, logkey, clientkey, premiseskey
>           ->  Seq Scan on log  (cost=0.00..52061.67 rows=478267 width=563) (actual time=29.340..100043.313 rows=475669 loops=1)
>                 Filter: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar) AND (logicaldel = 'N'::bpchar))
>   Total runtime: 262462.582 ms
> (6 rows)
>
>
> Every row in log has identical clientkey and premiseskey values, so if I
> just remove those columns from the order by clause, I get this far
> superior plan:
>
> explain analyze SELECT * FROM log WHERE clientkey in
> ('000000004500000000010000000001') AND premiseskey in
> ('000000004500000000010000000001') and logicaldel = 'N'
> ORDER BY logtime desc, logkey desc LIMIT 20 offset 0;
>                                                                  QUERY PLAN
>
-------------------------------------------------------------------------------------------------------------------------------------------
>   Limit  (cost=0.00..12.33 rows=20 width=563) (actual time=0.047..0.105 rows=20 loops=1)
>     ->  Index Scan Backward using log_ak1 on log  (cost=0.00..294735.70 rows=478267 width=563) (actual time=0.044..0.076 rows=20 loops=1)
>           Index Cond: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar))
>           Filter: (logicaldel = 'N'::bpchar)
>   Total runtime: 0.165 ms
> (5 rows)
>
>
> ...which made me think that maybe postgres is not using log_ak1 in the
> former case because two of the columns in the order by match every row.
>
> Unfortunately, in this case it's not an option to alter the query. I'm
> just trying to figure out an explanation.

In the first query, Postgres cannot use the index because the sort order
of the index does not match the sort order of the query. When you change
the query's sort order to match that of the index, the index can be used.
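To illustrate (the definition of log_ak1 is not shown in the thread, so
this is an assumed definition consistent with the plans above):

```sql
-- Assumed definition of log_ak1, inferred from the Index Cond in the
-- second plan; the real definition may differ:
CREATE INDEX log_ak1 ON log (clientkey, premiseskey, logtime, logkey);

-- A backward scan of this index emits rows in
--   clientkey DESC, premiseskey DESC, logtime DESC, logkey DESC
-- order. Because the equality filters pin clientkey and premiseskey to a
-- single value, that satisfies
--   ORDER BY logtime DESC, logkey DESC
-- (the second query), but NOT
--   ORDER BY logtime DESC, logkey DESC, clientkey DESC, premiseskey DESC
-- (the first query), whose leading sort columns do not line up with the
-- index column order, so the planner falls back to Sort over a Seq Scan.
```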

If you define your index on (logtime, logkey, clientkey, premiseskey)
rather than on (clientkey, premiseskey, logtime, logkey) you will have a
fast query. Yes, the column order matters.
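A sketch of that suggestion (the index name here is made up):

```sql
-- Hypothetical index whose column order mirrors the first query's
-- ORDER BY, so an Index Scan Backward can return the LIMIT 20 rows
-- without a separate Sort step:
CREATE INDEX log_order_idx ON log (logtime, logkey, clientkey, premiseskey);
```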

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com

