We vacuum only a few of our tables nightly; this one is done last because
it takes the longest to run. I'll probably reindex it soon, but I would
appreciate any advice on how to speed up the vacuum process (and the db in
general).
Okay, here's our system:
postgres 8.1.4
Linux version 2.4.21
Red Hat Linux 3.2.3
8 GB ram
Intel(R) Xeon(TM) CPU 3.20GHz
Raid 5
autovacuum=off
the machine serves as both the application server and the database server
the server is co-located in another city, so a hardware upgrade is not
currently an option
Here's the table information:
The table has 140,000 rows, 130 columns (mostly NUMERIC), and 60 indexes. It
is probably our 'key' table in the database and is referenced by almost
every query (usually joined to others). The table gets updated only
about 10 times a day. We were running autovacuum, but it interfered with
the updates, so we shut it off. We vacuum this table nightly, and it
currently takes about 12 hours. Not much else runs during this period,
and nothing that should affect the table.
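For reference, here is roughly what the nightly job runs, plus the size
check I look at before and after (the table name "key_table" below is a
stand-in for the real one):

    -- nightly maintenance, run from cron via psql during the quiet window
    VACUUM ANALYZE VERBOSE key_table;

    -- rough page/row estimate, checked before and after the vacuum
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'key_table';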
Here are the current non-default postgresql.conf settings:
max_connections = 100
shared_buffers = 50000
work_mem = 9192
maintenance_work_mem = 786432
max_fsm_pages = 70000
vacuum_cost_delay = 200
vacuum_cost_limit = 100
bgwriter_delay = 10000
fsync = on
checkpoint_segments = 64
checkpoint_timeout = 1800
effective_cache_size = 270000
random_page_cost = 2
log_destination = 'stderr'
redirect_stderr = on
client_min_messages = warning
log_min_messages = warning
stats_start_collector = off
stats_command_string = on
stats_block_level = on
stats_row_level = on
autovacuum = off
autovacuum_vacuum_threshold = 2000
deadlock_timeout = 10000
max_locks_per_transaction = 640
add_missing_from = on
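If it matters, the nightly job simply inherits the cost-based vacuum
settings above. I assume I could override them for just that session with
something like the following (an untested sketch, same stand-in table name),
but I'd like to know whether that is even the right knob to turn:

    -- disable cost-based vacuum throttling for this session only,
    -- since little else runs during the overnight window
    SET vacuum_cost_delay = 0;
    VACUUM ANALYZE VERBOSE key_table;
    RESET vacuum_cost_delay;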
As I mentioned, any insights into changing the configuration to optimize
performance are most welcome.
Thanks
Ron