I have two tables in the same database: geoname and test_table.
The geoname table contains many columns, including: name, feature_class, feature_code, admin1, admin2, admin3, and so on.
The second table 'test_table' contains only the columns: city, state.
There is no join between the two tables, and I want to match the data contained in each of them because I need the result for a further process.
where (lower(g.feature_class) = 'p' or lower(g.feature_class) = 'l')
and lower(g.country_code) = 'us'
and lower(g.admin1) = lower(t.state)
and (lower(g.name) = lower(t.city) or lower(g.name) = lower(t.city || ' city'))
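For reference, the same predicates can be written as an explicit inner join, which is the join the planner performs anyway. This is a sketch assuming the aliases g and t from the fragment above; note that when lower() is applied to the left-hand side, the literals being compared must themselves be lowercase, or nothing will ever match:

```sql
-- Explicit join between geoname (g) and test_table (t);
-- without usable indexes this still degenerates into a large scan.
SELECT g.name, g.admin1, t.city, t.state
FROM geoname g
JOIN test_table t
  ON  lower(g.admin1) = lower(t.state)
  AND (lower(g.name) = lower(t.city)
       OR lower(g.name) = lower(t.city || ' city'))
WHERE lower(g.feature_class) IN ('p', 'l')
  AND lower(g.country_code) = 'us';
```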
The table geoname contains 370260 rows and the table test_table contains 10270 rows.
The query took a long time to complete: more than half an hour. Should I add a country_code column to test_table and make an inner join with the geoname table, or should I use indexes to accelerate the process?
It would have been more helpful if you had included the actual table structures for both tables. However, I would start by creating separate expression indexes on lower(feature_class), lower(country_code), lower(admin1), lower(name), and lower(city).
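As a sketch of those expression indexes, assuming the column names shown in your query (the index names here are just illustrative):

```sql
-- Expression indexes on geoname so the lower() comparisons can use index scans
CREATE INDEX idx_geoname_lower_feature_class ON geoname (lower(feature_class));
CREATE INDEX idx_geoname_lower_country_code  ON geoname (lower(country_code));
CREATE INDEX idx_geoname_lower_admin1        ON geoname (lower(admin1));
CREATE INDEX idx_geoname_lower_name          ON geoname (lower(name));

-- Matching indexes on the smaller table
CREATE INDEX idx_test_table_lower_city  ON test_table (lower(city));
CREATE INDEX idx_test_table_lower_state ON test_table (lower(state));
```

After creating them, run EXPLAIN ANALYZE on the query to confirm the planner actually uses them.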
That being said, you are better off forcing lowercase on all fields BEFORE inserting into the table, e.g.:
INSERT INTO test_table (city, state) VALUES (lower(:city), lower(:state));
Then you would not need to use lower() in the indexes or the query.
Please, in the future, always include your PostgreSQL version and O/S.
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you wish to share my fantasy is entirely up to you.