On Mon, 2010-08-02 at 14:23 -0700, Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
>
> I experimented with lowering random_page_cost to 3 then 2.
In the case of a fully cached database it is closer to 1.
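For example, something like this in postgresql.conf (the values here are
illustrative, assuming the working set really does fit in RAM):

```
# most reads are served from cache, so random I/O is barely
# more expensive than sequential I/O
random_page_cost = 1.1
effective_cache_size = 500MB
```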
> 2) Why is the setting of work_mem something left to the admin and/or
> developer? Couldn't the optimizer say how much it thinks it needs to
> build a hash table based on size of the keys and estimated number of
> rows?
Yes, it can estimate how much it needs to build a hash table; the part
it can't figure out is how much it can afford, based on things like the
number of concurrent queries and how much work_mem each of them is
using. Any work_mem used is subtracted from the total memory pool, which
also affects how much of the files the system can cache.
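That is why a common workaround is to raise work_mem only for the
session (or even the single statement) known to need a big hash table,
instead of globally. A sketch (the size is made up, pick one that fits
your memory budget):

```sql
-- give just this session room for a large hash join
SET work_mem = '256MB';
-- ... run the heavy query ...
-- then drop back to the server default
RESET work_mem;
```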
> It is difficult for a software development platform like ours to take
> advantage of suggestions to set work_mem, or to change the cost
> function, or turn on/off join strategies for individual queries. The
> SQL we issue is formed by user interaction with the product and rarely
> static. How would we know when to turn something on or off? That's
> why I'm looking for a configuration solution that I can set on a
> database-wide basis and have it work well for all queries.
Keep trying. The closer your conf gets to real conditions, the better
choices the optimiser can make ;)
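If per-query tuning is impractical for generated SQL, a database-wide
default can still be set without touching postgresql.conf (database name
and value below are placeholders):

```sql
-- applies to all new connections to this database
ALTER DATABASE mydb SET work_mem = '32MB';
```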
--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training