Re: ORDER BY ... LIMIT.. performance - Mailing list pgsql-performance

From: Josh Berkus
Subject: Re: ORDER BY ... LIMIT.. performance
Date:
Msg-id: web-2024310@davinci.ethosmedia.com
In response to: Re: ORDER BY ... LIMIT.. performance  ("john cartmell" <john.cartmell@mediaburst.co.uk>)
List: pgsql-performance
John,

> But when it is 20:
>  EXPLAIN ANALYZE SELECT * FROM tblcompany WHERE
> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;
>  NOTICE:  QUERY PLAN:
>
>  Limit  (cost=0.00..4864.92 rows=20 width=223) (actual
> time=559.58..21895.02 rows=20 loops=1)
>    ->  Index Scan using idx_tblcompany_companyname on tblcompany
> (cost=0.00..1542006.83 rows=6339 width=223) (actual
> time=559.57..21894.97 rows=20 loops=1)
>  Total runtime: 21895.13 msec

That's extremely odd.   From the look of it, Postgres is taking an
extra 18 seconds just to find that 20th row.

Does this table experience very frequent deletions and updates, or
perhaps mass record replacement from a file?  Try running VACUUM FULL
ANALYZE, and possibly even a REINDEX on idx_tblcompany_companyname.
Massive numbers of dead tuples could account for this performance
irregularity.
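
As a sketch, that maintenance pass would look something like the
following (table and index names taken from your plan above; VERBOSE is
just an optional extra so you can see the dead-tuple counts reported):

  VACUUM FULL VERBOSE ANALYZE tblcompany;
  REINDEX INDEX idx_tblcompany_companyname;

Then re-run the EXPLAIN ANALYZE above and see whether the index scan's
actual time drops.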

-Josh
