There's no obvious reason for the previous query to be so slow, unless you've got horrendously slow or overloaded disk hardware. What sort of machine is this anyway, and was it doing any other work at the time?
Granted, it is doing other work besides database work, mainly CPU-intensive calculations.
Creating the (latitude, longitude, validtime, parname) index and moving the database files from a RAID-5 to a RAID-10 array have brought the query time down to ~4 seconds:
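For reference, the index was created along these lines (a sketch; the column order latitude, longitude, validtime, parname and the index name are taken from the plan's Index Cond, the exact original DDL is not preserved):

```sql
-- Sketch of the composite index; name and column order inferred
-- from the query plan, not copied from the original DDL.
CREATE INDEX tbl_20070601_latlonvalidparname_index
    ON tbl_20070601 (latitude, longitude, validtime, parname);
```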
db=# explain analyze select * from tbl_20070601
     where validtime between 20070602000000 and 20070602235500
       and latitude=60.2744 and longitude=26.4417
       and parname in ('temperature');
                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using tbl_20070601_latlonvalidparname_index on tbl_20070601  (cost=0.00..28.46 rows=13 width=137) (actual time=94.52..3743.53 rows=539 loops=1)
   Index Cond: ((latitude = 60.2744::double precision) AND (longitude = 26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND (validtime <= 20070602235500::bigint) AND (parname = 'temperature'::character varying))
 Total runtime: 3744.56 msec
(3 rows)
This is already a great improvement over the previous 8 seconds. Our application developers claim, though, that the same queries previously ran in under 1 second. The database had a mysterious crash a few months ago (some tables lost their contents), and performance has been poor ever since. I don't know the details of the crash since I only recently inherited the system, and unfortunately no log files remain. Could the crash have corrupted the catalog files in a way that slows down querying? I know this is a long shot, but I can't think of anything else.