>>> The question is how we can make it faster.
>>If there's just one region ID for any given postal code, you might try
>>adding a column to vehicleused and storing the region IDs there.
>>You could populate that column using a trigger; it shouldn't need to
>>change unless the postalcode changes. Then you could index
>>that column and query against it directly, rather than joining to
>>PostalCodeRegionCountyCity. Short of that, I don't see any obvious
>>way to avoid reading most of the vehicleused table. There may or may
>>not be an index that can speed that up slightly and of course you can
>>always throw hardware at the problem, but fundamentally reading half a
>>million or more rows isn't going to be instantaneous.
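To make that concrete, here is a rough sketch of what Robert describes.
The names (region_id, regionid, postalcode, the trigger function) are
guesses at your schema, so adjust to taste:

    -- add the denormalized column
    ALTER TABLE vehicleused ADD COLUMN region_id integer;

    -- keep it in sync when a row is inserted or its postal code changes
    CREATE OR REPLACE FUNCTION vehicleused_set_region() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT'
           OR NEW.postalcode IS DISTINCT FROM OLD.postalcode THEN
            SELECT regionid INTO NEW.region_id
              FROM PostalCodeRegionCountyCity
             WHERE lower(postalcode) = lower(NEW.postalcode);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER vehicleused_region_trg
        BEFORE INSERT OR UPDATE ON vehicleused
        FOR EACH ROW EXECUTE PROCEDURE vehicleused_set_region();

    -- backfill existing rows, then index the new column
    UPDATE vehicleused v
       SET region_id = p.regionid
      FROM PostalCodeRegionCountyCity p
     WHERE lower(p.postalcode) = lower(v.postalcode);

    CREATE INDEX vehicleused_region_idx ON vehicleused (region_id);

After that, queries can filter on vehicleused.region_id directly and skip
the join entirely.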
>>Incidentally, it would probably simplify things to store postal codes
>>in the same case throughout the system. If you can avoid the need to
>>write lower(x) = lower(y) and just write x = y you may get better
>>plans. I'm not sure that's the case in this particular example but
>>it's something to think about.
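If you do go the normalization route, a minimal sketch (assuming upper
case as the canonical form, and untested against your actual schema):

    -- one-time cleanup, then enforce the convention going forward
    UPDATE vehicleused SET postalcode = upper(postalcode)
     WHERE postalcode <> upper(postalcode);

    ALTER TABLE vehicleused
        ADD CONSTRAINT postalcode_upper_chk
        CHECK (postalcode = upper(postalcode));

With every table storing the same case, the lower() = lower() comparisons
can become plain equality.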
Something else you might test is bumping the read-ahead value. Most Linux
installs default this to 256 sectors (128KB); you might try raising it to
around 8MB and tuning from there. This may help sequential scan performance
slightly. As always: YMMV. It won't magically fix a low-performing I/O
subsystem, and it won't help many PG applications, but there are a few
outlying cases where this change can help a little bit.
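If you want to experiment, blockdev is the usual tool. Note the value is
given in 512-byte sectors, and /dev/sda below is just a placeholder for
your actual device:

    # show the current readahead setting (256 sectors = 128KB)
    blockdev --getra /dev/sda

    # bump it to roughly 8MB (16384 * 512 bytes), then re-test your seq scans
    blockdev --setra 16384 /dev/sda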
I am sure someone will step in and tell you it is a bad idea, and they will
probably have perfectly valid reasons why, so you will need to consider the
ramifications. If at all possible, test and tune to see.
..: Mark
>>--
>>Robert Haas
>>EnterpriseDB: http://www.enterprisedb.com
>>The Enterprise Postgres Company