Some other things that are important:
How much is the data in flux (updates/deletes/inserts)? If the
data is mostly or entirely static, you can add many special-case
indexes with little penalty. The biggest cost of indexes (besides disk
space consumed) is the slowdown of inserts, updates, and deletes. If
the data hardly changes, you can throw on index after index with
little cost. But if the data churns heavily, you will have to weigh
each index you add against your performance targets.
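To make the trade-off concrete, here is a sketch with a hypothetical orders table: every one of these indexes must be maintained on each write to the table, so each CREATE INDEX below buys faster reads at the price of slower inserts, updates, and deletes.

```sql
-- Hypothetical table and columns, for illustration only.
-- Each index speeds up one query pattern, but every
-- INSERT/UPDATE/DELETE on orders must now update all three.
CREATE INDEX orders_customer_idx ON orders (customer_id);
CREATE INDEX orders_date_idx     ON orders (order_date);
CREATE INDEX orders_status_idx   ON orders (status);
```

On a mostly read-only table this is cheap; on a hot table, benchmark your write path before and after adding each one.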
This stuff may prove to be of great value:
http://www.postgresql.org/docs/8.0/interactive/monitoring-stats.html
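One practical use of those statistics views is spotting indexes that never get used. Assuming the statistics collector is enabled in postgresql.conf, something like this shows per-index scan counts:

```sql
-- Indexes that show idx_scan = 0 after a representative
-- workload has run are candidates for dropping.
SELECT relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 ORDER BY idx_scan;
```

Let the stats accumulate over a realistic workload before drawing conclusions, since a rarely-run report may still depend on an index that looks idle.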
I would also run EXPLAIN against every distinct sort of query you plan
to execute (unless it is for ad-hoc reporting, in which case such a
requirement cannot be met).
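For example (hypothetical table and predicate), you can check whether the planner actually uses the index you expect, and EXPLAIN ANALYZE will additionally run the query and report real row counts and timings:

```sql
-- Plan only:
EXPLAIN
SELECT * FROM orders WHERE customer_id = 42;

-- Plan plus actual execution times and row counts:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```

If you see a sequential scan where you expected an index scan, check that the table has been ANALYZEd recently, since stale statistics are a common cause of bad plans.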