Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size? - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
Date
Msg-id 20160224165403.GA413518@alvherre.pgsql
In response to Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?  (Joe Conway <mail@joeconway.com>)
Responses Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
List pgsql-hackers
Joe Conway wrote:

> In my experience it is almost always best to run autovacuum very often
> and very aggressively. That generally means tuning scaling factor and
> thresholds as well, such that there are never more than say 50-100k dead
> rows. Then running vacuum with no delays or limits runs quite fast and is
> generally not noticeable/impactful.
> 
> However that strategy does not work well for vacuums which run long,
> such as an anti-wraparound vacuum. So in my opinion we need to think
> about this as at least two distinct cases requiring different solutions.

With the freeze map there is no need for anti-wraparound vacuums to be
terribly costly, since they don't need to scan the whole table each
time.  That patch probably changes things a lot in this area.
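The tuning Joe describes hinges on PostgreSQL's autovacuum trigger condition, which the documentation gives as: vacuum threshold = autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A minimal sketch of that arithmetic (parameter values here are illustrative assumptions, not recommendations):

```python
# Sketch of the autovacuum trigger formula from the PostgreSQL docs:
#   vacuum threshold = autovacuum_vacuum_threshold
#                      + autovacuum_vacuum_scale_factor * reltuples
# The table size and scale_factor values below are hypothetical examples.

def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead-tuple count at which autovacuum fires for a table,
    using the default threshold (50) and scale factor (0.2)."""
    return threshold + scale_factor * reltuples

# With defaults, a 100M-row table accumulates ~20M dead rows first.
default_trigger = autovacuum_trigger(100_000_000)

# Lowering scale_factor (per the strategy above) caps dead rows near 100k.
aggressive_trigger = autovacuum_trigger(100_000_000, scale_factor=0.001)
```

Each vacuum then has far less work to do, which is why it can run without cost delays and still go unnoticed.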

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


