Lance,
> The parameters I would think we should calculate are:
>
> max_connections
> shared_buffers
> work_mem
> maintenance_work_mem
> effective_cache_size
> random_page_cost
Actually, I'm going to argue against messing with random_page_cost. It's a
cannon being used when a slingshot is called for. Instead (and this was
the reason for the "What kind of CPU?" question) you want to reduce the
cpu_* costs. I generally find that if the cpu_* costs are reduced as
appropriate for modern, faster CPUs, and effective_cache_size is set
appropriately, a random_page_cost of 3.5 produces appropriate choices of
index scans.
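For a modern CPU, that adjustment might look something like the following
postgresql.conf fragment. The specific values here are illustrative only,
not a recommendation; the right numbers depend on your hardware and
workload:

```
# Defaults are cpu_tuple_cost = 0.01, cpu_index_tuple_cost = 0.005,
# cpu_operator_cost = 0.0025; lower them for a fast modern CPU.
cpu_tuple_cost = 0.005
cpu_index_tuple_cost = 0.001
cpu_operator_cost = 0.001

# effective_cache_size is in 8kB pages on older releases;
# 262144 pages is about 2GB -- set it near the OS cache size.
effective_cache_size = 262144

# With the CPU costs lowered, 3.5 tends to pick index scans sensibly.
random_page_cost = 3.5
```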
If you check out my spreadsheet version of this:
http://pgfoundry.org/docman/view.php/1000106/84/calcfactors.sxc
... you'll see that the approach I found most effective was to create
profiles for each of the types of db applications, and then adjust the
numbers based on those.
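The profile idea can be sketched in a few lines of Python. The per-profile
fractions below are made-up placeholders to show the shape of the
calculation, not the numbers from the spreadsheet:

```python
# Hypothetical sketch of profile-based tuning: pick a workload profile,
# then derive settings from total RAM. Fractions are illustrative only.

PROFILES = {
    # profile: (shared_buffers fraction, effective_cache_size fraction,
    #           work_mem in kB)
    "web":       (0.10, 0.50, 4096),
    "oltp":      (0.20, 0.60, 8192),
    "reporting": (0.25, 0.75, 65536),
}

def suggest(total_ram_mb, profile):
    """Return suggested settings for a machine with total_ram_mb of RAM."""
    sb_frac, ecs_frac, work_mem_kb = PROFILES[profile]
    return {
        "shared_buffers": "%dMB" % int(total_ram_mb * sb_frac),
        "effective_cache_size": "%dMB" % int(total_ram_mb * ecs_frac),
        "work_mem": "%dkB" % work_mem_kb,
    }

# Example: a 4GB reporting server.
print(suggest(4096, "reporting"))
```

The point is that one set of fractions can't serve both an OLTP box and a
reporting box; the profile captures that difference and the arithmetic
falls out of it.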
Other things to adjust:
wal_buffers
checkpoint_segments
commit_delay
vacuum_cost_delay
autovacuum
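Rough starting points for those, again illustrative rather than
prescriptive, and subject to version differences:

```
# Starting points only; tune for your workload and PostgreSQL version.
wal_buffers = 64           # in 8kB pages
checkpoint_segments = 16   # raise for write-heavy loads
commit_delay = 0           # microseconds; only helps with many concurrent commits
vacuum_cost_delay = 10     # milliseconds; throttles vacuum I/O
autovacuum = on            # use contrib pg_autovacuum on older releases
```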
Anyway, do you have a pgfoundry ID? I should add you to the project.
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco