Hi chaps,
Our legacy apps have some permanent tables that they use for temporary data and constantly clear out. I've kicked the
developers and I intend to eradicate them eventually (the tables, not the developers).
These tables are constantly being autovacuumed, approximately once a minute. It's not causing any problems and seems to
be keeping them vacuumed. But I'm constantly re-assessing our autovacuum settings to make sure they're adequate, and no
matter how much I read up on autovacuum I still feel like I'm missing something.
I just wondered what people's opinions were on handling this sort of vacuuming? Is once a minute too often?
The general autovacuum settings, tuned more for our central tables, are threshold 500, scale_factor 0.2. I guess I could
set specific settings for these tables in pg_autovacuum, or I could exclude them there and run a vacuum from cron once
a day or something.
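For reference, my understanding from the docs is that autovacuum vacuums a table once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick sketch with our settings and the ~171 live rows from the log below (the helper function name is mine, just for illustration):

```python
def autovacuum_trigger(base_threshold, scale_factor, live_tuples):
    """Dead-tuple count at which autovacuum should vacuum a table,
    per the documented formula: threshold + scale_factor * reltuples."""
    return base_threshold + scale_factor * live_tuples

# Our global settings (threshold 500, scale_factor 0.2) applied to a
# small temp-style table holding roughly 171 live rows:
trigger = autovacuum_trigger(500, 0.2, 171)
print(round(trigger))  # ~534 dead tuples before a vacuum fires
```

So with only ~171 live rows, churning through a few hundred rows a minute is enough to trip the threshold every minute, which matches what I'm seeing in the logs.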
Here's a typical log message:
2008-09-19 11:40:10 BST [12917]: [1-1]: [user=]: [host=]: [db=]:: LOG: automatic vacuum of table
"TEMP.reports.online": index scans: 1
pages: 21 removed, 26 remain
tuples: 2356 removed, 171 remain
system usage: CPU 0.00s/0.00u sec elapsed 0.08 sec
Any comments would be appreciated.