From: David Rowley
Subject: Re: Should we increase the default vacuum_cost_limit?
Msg-id: CAKJS1f9wbS+SzEDUXyMuLCsNgwMH=1Ztj3QE3WxuKdJtbqrOEA@mail.gmail.com
In response to: Re: Should we increase the default vacuum_cost_limit? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Sat, 9 Mar 2019 at 07:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Jeff Janes <jeff.janes@gmail.com> writes:
> > Now that this is done, the default value is only 5x below the hard-coded
> > maximum of 10,000.
> > This seems a bit odd, and not very future-proof.  Especially since the
> > hard-coded maximum appears to have no logic to it anyway, at least none
> > that is documented.  Is it just mindless nannyism?
>
> Hm.  I think the idea was that rather than setting it to "something very
> large", you'd want to just disable the feature via vacuum_cost_delay.
> But I agree that the threshold for what is ridiculously large probably
> ought to be well more than 5x the default, and maybe it is just mindless
> nannyism to have a limit less than what the implementation can handle.

Yeah, +1 to increasing it.  I imagine that the 10,000 limit would not
allow people to explore the upper limits of a modern PCI-E SSD with
the standard delay time and dirty/miss scores.  Also, it doesn't seem
entirely unreasonable that someone somewhere might want to fine-tune
the hit/miss/dirty scores so that they're some larger factor apart
from each other than the standard scores are.  The 10,000 limit does
not allow much wiggle room for that.
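
To put rough numbers on that: with the stock scores (hit/miss/dirty =
1/10/20, if memory serves) and a 2ms delay, a cost limit of 10,000
caps an all-dirtying vacuum at about 10,000 / 20 = 500 pages per
sleep, i.e. ~250,000 pages/sec or ~2GB/sec with 8kB pages, ignoring
the time spent on the work itself.  A quick sketch of the knobs
involved (values hypothetical, for illustration only, not
recommendations):

    vacuum_cost_delay = 2ms       # 0 disables cost-based delay entirely
    vacuum_cost_limit = 10000     # the current hard-coded maximum
    vacuum_cost_page_hit = 1      # stock score: page found in shared_buffers
    vacuum_cost_page_miss = 10    # stock score: page read in from disk
    vacuum_cost_page_dirty = 20   # stock score: previously-clean page dirtied

A fast PCI-E SSD can get close to, or past, that ~2GB/sec ceiling, so
there's not much headroom left in the limit as it stands.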

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

