On Fri, Jun 6, 2008 at 12:30 AM, Greg Smith <gsmith@gregsmith.com> wrote:
> vacuum_cost_delay = 750
> autovacuum = true
> autovacuum_naptime = 3600
> autovacuum_vacuum_threshold = 1000
> autovacuum_analyze_threshold = 500
> autovacuum_vacuum_scale_factor = 0.4
> autovacuum_analyze_scale_factor = 0.2
> autovacuum_vacuum_cost_delay = -1
> autovacuum_vacuum_cost_limit = -1
> max_fsm_pages = 5000000
> max_fsm_relations = 2000
These are terrible settings for a busy database. A cost delay of
anything over 10 or 20 ms is usually WAY too big, and will make
vacuums take nearly forever. Naptime of 3600 is 1 hour, right? That's
also far too long to nap between checks for whether another vacuum
should run.
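For reference, autovacuum decides to vacuum a table once its dead
rows exceed roughly:

    autovacuum_vacuum_threshold
        + autovacuum_vacuum_scale_factor * reltuples

so with the settings above, a 10 million row table has to accumulate
about 1000 + 0.4 * 10,000,000 = 4,001,000 dead rows before autovacuum
even considers it.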
I'd recommend:
vacuum_cost_delay = 20
autovacuum = true
autovacuum_naptime = 300 # 5 minutes.
Note that I'm used to 8.2, where such settings can be written in more
easily readable units like 5min. So if 3600 here is in some other
unit, I could be wrong.
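For example, on 8.2 or later the naptime above can go in
postgresql.conf as:

    autovacuum_naptime = 5min    # same as 300 seconds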
> Now, when I was on the phone about this system, I recall hearing that
> they've fallen into that ugly trap where they are forced to reload this
> database altogether regularly to get performance to stay at a reasonable
> level. That's usually a vacuum problem, and yet another reason to upgrade
> to 8.3 so you get the improved autovacuum there. Vacuum tuning isn't really
> my bag, and I'm out of time here tonight; anybody else want to make some
> suggestions on what might be changed here based on what I've shared about
> the system?
It may well be that their best option is to manually vacuum certain
tables more often (i.e. the ones that bloat). You can write a script
that connects, sets vacuum_cost_delay to something higher, like 20 or
30, and then runs the vacuum by hand; a minimal sketch follows. Such
a vacuum may need to run in a nearly continuous loop if the update
rate is high enough.
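Something like this, run from cron as often as needed (the database
and table names are placeholders; point it at whatever actually
bloats):

    #!/bin/sh
    # Feed the statements to psql on stdin so each one runs
    # separately: VACUUM can't run inside a transaction block,
    # which is what a multi-statement -c string would give you.
    psql -d mydb <<'SQL'
    SET vacuum_cost_delay = 20;
    VACUUM ANALYZE busy_table;
    SQL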
I agree with what you said earlier: the biggest mistake here is
running a db on a RAID-5 array.