Re: autovacuum prioritization - Mailing list pgsql-hackers

From: Peter Geoghegan
Subject: Re: autovacuum prioritization
Date:
Msg-id: CAH2-Wzm3_23Ri4XC43bGX3td2HLWg72-MmX=5Q+i_8n0_VOQjA@mail.gmail.com
In response to: autovacuum prioritization (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: autovacuum prioritization
List: pgsql-hackers
On Thu, Jan 20, 2022 at 11:24 AM Robert Haas <robertmhaas@gmail.com> wrote:
> In my view, previous efforts in this area have been too simplistic.
> For example, it's been proposed that a table that is perceived to be
> in any kind of wraparound danger ought to get top priority, but I find
> that implausible.

I agree that it doesn't follow that table A should be more of a
priority than table B, either because it has a greater age, or because
its age happens to exceed some actually-arbitrary threshold. But I
will point out that my ongoing work on freezing does make something
along these lines much more plausible. As I said over on that thread,
there is now a kind of "natural variation" among tables, in terms of
relfrozenxid, as a result of tracking the actual oldest XID, and using
that (plus the emphasis on advancing relfrozenxid wherever possible).
And so we'll have a much better idea of what's going on with each
table -- it's typically a precise XID value from the table, from the
recent past.

As of today, on HEAD, the picture is rather fuzzy. If a table has a
really old relminmxid, then which is more likely: 1. there are lots of
remaining MultiXactIds referenced by the old table, or 2. it has been
a while since that table was last aggressively vacuumed, and it
actually has exactly zero MultiXactId references? I would guess 2
myself, but right now I could never be too sure. But in a world where
we consistently advance relfrozenxid and relminmxid, *not* advancing
them (or advancing either by relatively little in one particular
table) becomes a strong signal, in a way that it just isn't
currently.
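
(For anyone who wants to eyeball this on their own system: the
per-table values in question are just pg_class.relfrozenxid and
pg_class.relminmxid, and a rough way to look at them, using only the
standard age() and mxid_age() functions, is something like the query
below.)

    -- How far behind each table's relfrozenxid/relminmxid are
    SELECT c.oid::regclass        AS table_name,
           age(c.relfrozenxid)    AS xid_age,   -- XIDs consumed since relfrozenxid
           mxid_age(c.relminmxid) AS mxid_age   -- MultiXactIds consumed since relminmxid
    FROM pg_class c
    WHERE c.relkind IN ('r', 'm', 't')          -- tables, matviews, TOAST
    ORDER BY age(c.relfrozenxid) DESC;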

This is a negative signal, not a positive signal. And as you yourself
go on to say, that's what any new heuristics for this stuff ought to
be exclusively concerned with -- what not to allow to happen, ever.
There is a great deal of diversity among healthy databases; it's hard
to make generalizations about them that hold up. But unhealthy (really
very unhealthy) states are *far* easier to recognize and understand,
without really needing to understand the workload itself at all.

Since we now have the failsafe, the scheduling algorithm can afford to
not give too much special attention to table age until we're maybe
over the 1 billion age mark -- or even 1.5 billion+. But once the
scheduling stuff starts to give table age special attention, it should
probably become the dominant consideration, by far, completely
drowning out any signals about bloat. It's kinda never really supposed
to get that high, so when we do end up there it is reasonable to fully
freak out. Unlike the bloat criteria, the wraparound safety criteria
don't seem to have much recognizable space between not worrying at
all and freaking out.
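
(Just to illustrate the shape of rule I mean -- the threshold and the
weighting here are completely made up, and n_dead_tup is only standing
in for whatever bloat signal we'd actually use -- a scheduler might
rank tables along these lines:)

    -- Sketch only: bloat drives the ordering until table age crosses an
    -- arbitrary "start worrying" threshold, after which age drowns it out
    SELECT s.relid::regclass AS table_name,
           CASE
               WHEN age(c.relfrozenxid) > 1000000000           -- ~1 billion XIDs
                   THEN age(c.relfrozenxid)::bigint * 1000000  -- age dominates
               ELSE s.n_dead_tup                               -- otherwise: bloat
           END AS priority
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY priority DESC;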

> A second problem is that, if the earliest need-to-start time is in the
> past, then we definitely are in trouble and had better get to work at
> once, but if it's in the future, that doesn't necessarily mean we're
> safe.

There is a related problem that you didn't mention:
autovacuum_max_workers controls how many autovacuum workers can run at
once, but there is no particular concern for whether or not running
that many workers actually makes sense, in any given scenario. As a
general rule, the system should probably be *capable* of running a
large number of autovacuums at the same time, but never actually do
that (because it just doesn't ever prove necessary). Better to have
the option and never use it than need it and not have it.
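
(As an aside on how today's knobs interact: the cost limit budget is
shared among whichever workers happen to be running, so raising the
worker cap by itself doesn't buy more total throughput. The settings
below -- values chosen purely for illustration, not as a
recommendation -- are roughly what "capable of many workers, but
rarely using them all" looks like with the existing GUCs:)

    -- Illustrative values only; autovacuum_max_workers needs a restart
    ALTER SYSTEM SET autovacuum_max_workers = 10;          -- capability, not expected usage
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- total budget, shared by active workers
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms'; -- per-worker throttling still applies
    SELECT pg_reload_conf();  -- the cost settings take effect without a restart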

> In the meantime, I think a sensible place to start would be to figure
> out some system that makes sensible estimates of how soon we need to
> address bloat, XID wraparound, and MXID wraparound for each table, and
> some system that estimates how long each one will take to vacuum.

I think that it's going to be hard to accurately model how long index
vacuuming will take, and harder still to model which indexes will
adversely impact the user in some way if we delay vacuuming some
more. It might be more useful to start off by addressing how to
spread out the burden of vacuuming over time. The needs of queries
matter, but controlling costs matters too.

One of the most effective techniques is to manually VACUUM when the
system is naturally idle, like at night time. If that could be
quasi-automated, or if the criteria used by autovacuum scheduling gave
just a little weight to how busy the system is right now, then we
would have more slack when the system becomes very busy.
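
(The crude version of that is already possible with nothing more than
cron and vacuumdb; a minimal sketch, with a made-up schedule and
options chosen only for illustration, might be:)

    # Run a database-wide VACUUM (with ANALYZE) at 02:30, when the system
    # is expected to be idle; path, schedule, and options are placeholders
    30 2 * * * /usr/bin/vacuumdb --all --analyze --jobs=4 --quiet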

--
Peter Geoghegan


