sequential scan performance - Mailing list pgsql-performance

From: Michael Engelhart
Subject: sequential scan performance
Date:
Msg-id: D520F8B3-20D6-4272-A6D6-8B690871DE73@mac.com
Responses: Re: sequential scan performance (4 replies)
List: pgsql-performance
Hi -

I have a table of about 3 million rows of city "aliases" that I need
to query using LIKE - for example:

select * from city_alias where city_name like '%FRANCISCO'


When I do an EXPLAIN ANALYZE on the above query, the result is:

  Seq Scan on city_alias  (cost=0.00..59282.31 rows=2 width=42)
(actual time=73.369..3330.281 rows=407 loops=1)
    Filter: ((name)::text ~~ '%FRANCISCO'::text)
Total runtime: 3330.524 ms
(3 rows)


This is a query that our system needs to run a LOT. Is there any way
to improve its performance, either by changing the query or by
reconfiguring the database deployment? We have an index on city_name,
but with a leading % wildcard in the pattern PostgreSQL can't use the
index.
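
One workaround that is often suggested for leading-wildcard patterns
like this one is to index the reversed string, so the suffix match can
be rewritten as an index-friendly prefix match. A minimal sketch,
assuming a reverse() function is available (built in from PostgreSQL
9.1; older releases need a user-defined one) and using a hypothetical
index name:

-- Expression index on the reversed name; text_pattern_ops lets LIKE
-- prefix matches use the index even in a non-C locale.
CREATE INDEX city_alias_name_rev_idx
    ON city_alias (reverse(city_name) text_pattern_ops);

-- The suffix match then becomes a prefix match on the reversed value:
SELECT *
  FROM city_alias
 WHERE reverse(city_name) LIKE reverse('FRANCISCO') || '%';

On later releases, the contrib pg_trgm module with a GIN index on
city_name is another commonly cited option for this kind of pattern.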

Thanks for any help.

Mike
