Re: vacuum_cost_page_miss default value and modern hardware - Mailing list pgsql-hackers

From: Masahiko Sawada
Subject: Re: vacuum_cost_page_miss default value and modern hardware
Msg-id: CAD21AoCz8xCkSAX194Bb3nSjvNLY06xOG5tG=YqdAw3mjXsXJw@mail.gmail.com
In response to: vacuum_cost_page_miss default value and modern hardware (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
On Mon, Dec 28, 2020 at 5:17 AM Peter Geoghegan <pg@bowt.ie> wrote:
>
> Simply decreasing vacuum_cost_page_dirty seems like a low risk way of
> making the VACUUM costing more useful within autovacuum workers.
> Halving vacuum_cost_page_dirty to 5 would be a good start, though I
> think that a value as low as 2 would be better. That would make it
> only 2x vacuum_cost_page_hit's default (i.e 2x the cost of processing
> a page that is in shared_buffers but did not need to be dirtied),
> which seems sensible to me when considered in the context in which the
> value is actually applied (and not some abstract theoretical context).

Perhaps you meant to decrease vacuum_cost_page_miss instead of
vacuum_cost_page_dirty?
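
For concreteness, here is how that maps onto the stock settings (my
own illustration, assuming the parameter meant is
vacuum_cost_page_miss, as the subject line suggests; the values are
the PostgreSQL 13 defaults):

    -- Defaults under discussion (PostgreSQL 13):
    --   vacuum_cost_page_hit   = 1
    --   vacuum_cost_page_miss  = 10
    --   vacuum_cost_page_dirty = 20
    -- Proposed direction: make a miss cost only 2x a hit.
    ALTER SYSTEM SET vacuum_cost_page_miss = 2;
    SELECT pg_reload_conf();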

>
> There are a few reasons why this seems like a good idea now:
>
> * Throttling/delaying VACUUM is only useful as a way of smoothing the
> impact on production queries, which is an important goal, but
> currently we don't discriminate against the cost that we really should
> keep under control (dirtying new pages within VACUUM) very well.
>
> This is due to the aforementioned trends, the use of a strategy ring
> buffer by VACUUM, the fact that indexes are usually vacuumed in
> sequential physical order these days, and many other things that were
> not a factor in 2004.
>
> * There is a real downside to throttling VACUUM unnecessarily, and the
> effects are *non-linear*. On a large table, the oldest xmin cutoff may
> become very old by the time we're only (say) half way through the
> initial table scan in lazy_scan_heap(). There may be relatively little
> work to do because most of the dead tuples won't be before the oldest
> xmin cutoff by that time (VACUUM just cannot keep up). Excessive
> throttling for simple page misses may actually *increase* the amount
> of I/O that VACUUM has to do over time -- we should try to get to the
> pages that actually need to be vacuumed quickly, which are probably
> already dirty anyway (and thus are probably going to add little to the
> cost delay limit in practice). Everything is connected to everything
> else.
>
> * vacuum_cost_page_miss is very much not like random_page_cost, and
> the similar names confuse the issue -- this is not an optimization
> problem. Thinking about VACUUM as unrelated to the workload itself is
> obviously wrong. Changing the default is also an opportunity to clear
> that up.
>
> Even if I am wrong to suggest that a miss within VACUUM should only be
> thought of as 2x as expensive as a hit in some *general* sense, I am
> concerned about *specific* consequences. There is no question about
> picking the best access path here -- we're still going to have to
> access the same blocks in the same way sooner or later. In general I
> think that we should move in the direction of more frequent, cheaper
> VACUUM operations [1], though we've already done a lot of work in that
> direction (e.g. freeze map work).

I agree with that direction.
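
As an aside, one way to move toward more frequent, cheaper vacuums
today is per-table autovacuum settings. A sketch with made-up values
(busy_table is hypothetical):

    -- Trigger autovacuum at ~2% dead tuples instead of the default
    -- 20%, so each run has less garbage to process.
    ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.02);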

>
> * Some impact from VACUUM on query performance may in fact be a good thing.
>
> Deferring the cost of vacuuming can only make sense if we'll
> eventually be able to catch up because we're experiencing a surge in
> demand, which seems kind of optimistic -- it seems more likely that
> the GC debt will just grow and grow. Why should the DBA not expect to
> experience some kind of impact, which could be viewed as a natural
> kind of backpressure? The most important thing is consistent
> performance.
>
> * Other recent work such as the vacuum_cleanup_index_scale_factor
> patch has increased the relative cost of index vacuuming in some
> important cases: we don't have a visibility/freeze map for indexes,
> but index vacuuming that doesn't dirty any pages and has a TID kill
> list that's concentrated at the end of the heap/table is pretty cheap
> (the TID binary search is cache efficient/cheap). This change will
> help these workloads by better reflecting the way in which index
> vacuuming can be cheap for append-only tables with a small amount of
> garbage for recently inserted tuples that also got updated/deleted.
>
> * Lowering vacuum_cost_page_miss's default (as opposed to changing
> something else) is a simple and less disruptive way of achieving these
> goals.
>
> This approach seems unlikely to break existing VACUUM-related custom
> settings from current versions that get reused on upgrade. I expect
> little impact on small installations.
>

I recalled the discussion about decreasing the default value of
autovacuum_vacuum_cost_delay from 20ms to 2ms in PostgreSQL 12. I
re-read through that discussion, but there was no discussion of
changing the hit/miss/dirty costs.

Whereas the change we made to autovacuum_vacuum_cost_delay affects
every installation, lowering vacuum_cost_page_miss would have a
different impact depending on the workload, database size, and so on.
For example, a user could see a larger I/O spike in a case where the
database doesn't fit in the server's RAM and vacuum is processing cold
tables/indexes, such as during an anti-wraparound vacuum.
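
A back-of-the-envelope calculation of that spike (my own numbers,
using the stock vacuum_cost_limit = 200 and
autovacuum_vacuum_cost_delay = 2ms, and ignoring the time the reads
themselves take): in the worst case, where every page is a miss, the
throttle allows vacuum_cost_limit / vacuum_cost_page_miss page misses
per 2ms sleep.

    -- Approximate worst-case page-miss rate permitted by the cost model:
    SELECT 200 / 10 / 0.002 AS pages_per_sec_at_miss_10,  -- 10000 (~80 MB/s at 8kB)
           200 / 2  / 0.002 AS pages_per_sec_at_miss_2;   -- 50000 (~400 MB/s at 8kB)

So lowering the default roughly quintuples the worst-case read rate.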

Lowering vacuum_cost_page_miss basically makes sense to me. But I'm a
bit concerned that cheaper hardware with a small amount of RAM would
be the most affected by this change. Since the database doesn't fit in
the server's RAM there, pages are likely to be in neither shared
buffers nor the OS page cache. PostgreSQL's default values seem
conservative to me (which is fine), so there might be an argument that
this change could cause trouble in exactly the kind of low-end
environment that those conservative defaults are protecting.
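
If that turns out to be a problem, the old behavior is easy to restore
on such a machine (a hypothetical remediation, not something discussed
in this thread; cold_table is made up):

    -- Reinstate the historical miss cost instance-wide...
    ALTER SYSTEM SET vacuum_cost_page_miss = 10;
    SELECT pg_reload_conf();
    -- ...or throttle vacuum harder on one particular large, cold table:
    ALTER TABLE cold_table SET (autovacuum_vacuum_cost_delay = 10);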

Regards,

--
Masahiko Sawada
EnterpriseDB:  https://www.enterprisedb.com/


