> When I do an EXPLAIN ANALYZE on the above query, the result is:
>
> Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) (actual
> time=73.369..3330.281 rows=407 loops=1)
> Filter: ((name)::text ~~ '%FRANCISCO'::text)
> Total runtime: 3330.524 ms
> (3 rows)
>
>
> This is a query that our system needs to do a LOT. Is there any way
> to improve the performance, either by changing the query or by
> configuring the database deployment? We have an index on city_name,
> but when the query string starts with a % wildcard PostgreSQL can't
> use the index.
Of course not. There really is no way to make your literal query, as
written, fast. You could try creating a functional index on the
reverse() of the string and querying for the reverse() of 'FRANCISCO',
so the wildcard ends up at the end of the pattern instead of the front.
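Something along these lines (untested sketch; the index name is made up,
it assumes a reverse() text function is available in your install -- on
older releases you may have to define one yourself -- and it uses
text_pattern_ops so LIKE can use the index under a non-C locale):

    CREATE INDEX city_alias_name_rev_idx
        ON city_alias (reverse(name) text_pattern_ops);

    -- 'OCSICNARF%' is 'FRANCISCO' reversed; the pattern is now anchored
    -- at the front, so the planner can use the index.
    SELECT *
      FROM city_alias
     WHERE reverse(name) LIKE 'OCSICNARF%';

If your database was initdb'd in the C locale you can drop the
text_pattern_ops opclass.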
Or, if you want a general full text index, you should absolutely be
using contrib/tsearch2.
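Roughly the shape of a tsearch2 setup, from memory -- the column, index,
and trigger names here are my own inventions, so check the
contrib/tsearch2 README for the exact steps:

    ALTER TABLE city_alias ADD COLUMN name_fti tsvector;
    UPDATE city_alias SET name_fti = to_tsvector('default', name);
    CREATE INDEX city_alias_fti_idx ON city_alias USING gist (name_fti);

    -- keep the tsvector column current on insert/update
    CREATE TRIGGER city_alias_fti_trig
        BEFORE INSERT OR UPDATE ON city_alias
        FOR EACH ROW EXECUTE PROCEDURE tsearch2(name_fti, name);

    -- word-level match; finds 'SAN FRANCISCO', 'FRANCISCO', etc.
    SELECT * FROM city_alias
     WHERE name_fti @@ to_tsquery('default', 'francisco');

Note that this does word matching rather than arbitrary substring
matching, which may or may not be what your application needs.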
Chris