Hi,
it seems to me that the optimizer parameters (like random_page_cost
etc.) could easily be calculated and adjusted dynamically by the DB
backend, based on the planner's cost estimates and the actual run times
of different queries. Perhaps the developers could comment on that?
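The raw data for such a feedback loop is already visible today; for
example (mytable and id are just placeholders here), something like

  EXPLAIN ANALYZE SELECT * FROM mytable WHERE id > 1000;

prints, for every plan node, both the estimated cost (in cost units)
and the actual time in milliseconds, so one could in principle adjust
random_page_cost & co. until the cost units track milliseconds
reasonably well.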
I'm not sure how the parameters are used internally (apart from
whatever "EXPLAIN" shows), but if cpu_operator_cost is the same for all
operators, it should probably also be adjusted per operator
(I suppose that ">" is not as costly as "~*").
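To illustrate with two made-up queries (t, n, s being a table, an
integer column and a text column), in the current model both

  SELECT count(*) FROM t WHERE n > 42;
  SELECT count(*) FROM t WHERE s ~* 'foo(bar)+';

are charged the same cpu_operator_cost per row for evaluating the
qualification, even though the case-insensitive regex match clearly
does far more work per call than the integer comparison.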
As far as the static configuration is concerned, I'd be interested in
other users' parameters and hardware configurations. Here's ours (for a
write-intensive db that also performs many queries with regular
expression matching):
effective_cache_size = 1000000 # typically 8KB each
#random_page_cost = 0.2 # units are one sequential page fetch cost
random_page_cost = 3 # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01 # (same)
cpu_index_tuple_cost = 0.01 # (same) 0.1
#cpu_operator_cost = 0.0025 # (same)
cpu_operator_cost = 0.025 # (same)
other options:
shared_buffers = 240000 # 2*max_connections, min 16, typically 8KB each
max_fsm_relations = 10000 # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 10000000 # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 20 # min 10
wal_buffers = 128 # min 4, typically 8KB each
sort_mem = 800000 # min 64, size in KB
vacuum_mem = 100000 # min 1024, size in KB
checkpoint_segments = 80 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 300 # range 30-3600, in seconds
commit_delay = 100000 # range 0-100000, in microseconds
commit_siblings = 5 # range 1-1000
12GB RAM, dual 2.80GHz Xeon, 6x 10K rpm disks in a RAID-5, Linux 2.4.23
with HT enabled.
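To put those settings in relation to the 12GB of RAM (assuming 8KB
pages, and if I've done the arithmetic right):

  shared_buffers        240000  * 8KB   ~ 1.8GB
  effective_cache_size  1000000 * 8KB   ~ 7.6GB
  sort_mem              800000KB        ~ 780MB per sort operation
  checkpoint_segments   80      * 16MB  ~ 1.3GB of log segments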
Regards,
Marinos