Re: Should we increase the default vacuum_cost_limit? - Mailing list pgsql-hackers

From David Rowley
Subject Re: Should we increase the default vacuum_cost_limit?
Date
Msg-id CAKJS1f9Rg_dms4JsyNWiisS3BseHjhNB7LWfFJtviZMkoTyj7A@mail.gmail.com
In response to Re: Should we increase the default vacuum_cost_limit?  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Should we increase the default vacuum_cost_limit?
List pgsql-hackers
On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The second patch is a delta that rounds off to the next smaller unit
> if there is one, producing a less noisy result:
>
> regression=# set work_mem = '30.1GB';
> SET
> regression=# show work_mem;
>  work_mem
> ----------
>  30822MB
> (1 row)
>
> I'm not sure if that's a good idea or just overthinking the problem.
> Thoughts?

I don't think you're overthinking it.  I often have to look at such
settings, and I'm probably not unique in that when I glance at 30822MB I
can see that's roughly 30GB, whereas when I look at 31562138kB, I'm
either counting digits or reaching for a calculator.  This is going to
reduce the time it takes for a human to process the pg_settings output,
so I think it's a good idea.
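
To make the rounding concrete, here is a minimal sketch of the idea
(not the actual patch; the function name and rounding rule are just
illustrative): if the stored kB value is an exact multiple of a
gigabyte, display it in GB, otherwise round to the nearest megabyte
instead of falling back to raw kB.

    #include <stdio.h>

    /* Hypothetical sketch: pick a display unit for a size given in kB. */
    static void
    show_size_kb(long kb)
    {
        if (kb % (1024L * 1024L) == 0)
            printf("%ldGB\n", kb / (1024L * 1024L));
        else
            printf("%ldMB\n", (kb + 512) / 1024);   /* round to nearest MB */
    }

    int
    main(void)
    {
        show_size_kb(31562138);   /* the 30.1GB example -> prints 30822MB */
        show_size_kb(31457280);   /* exactly 30GB -> prints 30GB */
        return 0;
    }

Run against the example above, 31562138kB comes out as 30822MB, while
an exact 30GB setting still displays as 30GB.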

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

