
From: Peter Geoghegan
Subject: Re: GIN pending list cleanup during autoanalyze blocks cleanup by VACUUM
Msg-id: CAH2-Wzk6+A=tAx62qrG9=43QSDzbVYDHQyrEs_FRPuGW-CkNXA@mail.gmail.com
In response to: Re: GIN pending list cleanup during autoanalyze blocks cleanup by VACUUM (Masahiko Sawada <sawada.mshk@gmail.com>)
List: pgsql-hackers
On Thu, Oct 14, 2021 at 12:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> Looking at the original commit, as you mentioned, ISTM performing
> pending list cleanup during (auto)analyze (and analyze_only) was
> introduced to perform the pending list cleanup on insert-only tables.
> Now that we have autovacuum_vacuum_insert_threshold, we don’t
> necessarily need to rely on that.

Right.
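
For an insert-only table, a couple of reloptions now cover the case
that the autoanalyze hack was originally added for (table name and
numbers here are only illustrative):

    -- Trigger autovacuum on inserts alone, so the insert-only table
    -- reaches the regular VACUUM path in ginvacuumcleanup()
    ALTER TABLE insert_only_events
        SET (autovacuum_vacuum_insert_threshold = 100000,
             autovacuum_vacuum_insert_scale_factor = 0.0);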

> On the other hand, I still see a little value in performing the
> pending list cleanup during autoanalyze. For example, if the user
> wants to clean up the pending list frequently in the background (so
> that it's not triggered in the INSERT path), it might be better to do
> that during autoanalyze than autovacuum. If the table has garbage,
> autovacuum has to vacuum all indexes and the table, taking a very long
> time. But autoanalyze can be done in a shorter time. If we trigger
> autoanalyze frequently and perform pending list cleanup, the pending
> list cleanup can also be done in a relatively short time, preventing
> MVCC snapshots from being held for a long time.

I agree that that's true -- there is at least a little value. But,
that's just an accident of history.
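
To make the scenario you describe concrete, it would amount to
something like this (table/index names and numbers are only
illustrative):

    -- Make autoanalyze fire frequently on the table, so that the
    -- analyze_only path in ginvacuumcleanup() flushes the GIN pending
    -- list in the background
    ALTER TABLE insert_only_events
        SET (autovacuum_analyze_scale_factor = 0.0,
             autovacuum_analyze_threshold = 50000);

    -- Though the same cleanup can already be requested explicitly,
    -- without involving autovacuum or autoanalyze at all
    SELECT gin_clean_pending_list('insert_only_events_gin_idx');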

Today, ginvacuumcleanup() won't need to scan the whole index in the
autoanalyze path that I'm concerned about - it will just do pending
list insertion. This does mean that the autoanalyze path taken within
ginvacuumcleanup() should be a lot faster than a similar cleanup-only
call to ginvacuumcleanup(). But...is there actually a good reason for
that? Why should a cleanup-only VACUUM ANALYZE (i.e. a V-A where the
VACUUM does not see any heap-page LP_DEAD items) be so much slower
than a similar ANALYZE against the same table, under the same
conditions? I see no good reason.
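
To put rough numbers on that asymmetry, pageinspect can show the size
of the pending list next to the index as a whole (index name is only
an example):

    CREATE EXTENSION IF NOT EXISTS pageinspect;

    -- The pending-list figures are all that the fast autoanalyze path
    -- has to process; n_total_pages approximates what a whole-index
    -- scan in ginvacuumcleanup() has to read
    SELECT n_pending_pages, n_pending_tuples, n_total_pages
      FROM gin_metapage_info(get_raw_page('insert_only_events_gin_idx', 0));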

Ideally, ginvacuumcleanup() would behave like btvacuumcleanup() and
hashvacuumcleanup(). That is, it should not unnecessarily scan the
index (even when used by VACUUM). In other words, it should have
something like the "Skip full index scan" mechanism that you added to
nbtree in commit 857f9c36. That way it just wouldn't make sense to
have this autoanalyze path anymore -- it would no longer have this
accidental advantage over a regular ginvacuumcleanup() call made from
VACUUM.

More generally, I think it's a big problem that ginvacuumcleanup() has
so many weird special cases. Why does the full_clean argument to
ginInsertCleanup() even exist? It makes the behavior inside
ginInsertCleanup() vary based on whether we're in autovacuum (or
autoanalyze) for no reason at all. I think that the true reason for
this is simple: the code in ginInsertCleanup() is *bad*. full_clean
was simply forgotten about during one of the many bug fixes made since
the code quality started to go down. Somebody (who shall remain
nameless) was just careless when maintaining that code.

VACUUM should be in charge of index AMs -- not the other way around.
It's good that the amvacuumcleanup() interface is so flexible, but I
think that GIN is over relying on this flexibility. Ideally, VACUUM
wouldn't have to think about the specific index AMs involved at all --
why should GIN be so different to GiST, nbtree, or hash? If GIN (or
any other index AM) behaves like a special little snowflake, with its
own unique performance characteristics, then it is harder to improve
certain important things inside VACUUM. For example, the conveyor belt
index vacuuming design from Robert Haas won't work as well as it
could.

> Therefore, I personally think that it's better to eliminate
> analyze_only code after introducing a way that allows us to perform
> the pending list cleanup more frequently. I think that the idea of
> Jaime Casanova's patch is a good solution.

I now think that it would be better to fix ginvacuumcleanup() in the
way that I described (I changed my mind). Jaime's patch does seem like
a good idea to me, but not for this. It makes sense to have that, for
the reasons that Jaime himself gave.

> I'm not very positive about back-patching. The first reason is what I
> described above; I still see little value in performing pending list
> cleanup during autoanalyze. Another reason is that if the user relies
> on autoanalyze to perform pending list cleanup, they have to enable
> autovacuum_vacuum_insert_threshold instead during the minor upgrade.
> Since it also means triggering autovacuum in more cases, I think it
> will have a profound impact on the existing system.

I have to admit that when I wrote my earlier email, I was still a
little shocked by the problem -- which is not a good state of mind
when making a decision about backpatching. But, I now think that I
underappreciated the risk of making the problem worse in the
backbranches, rather than better. I won't be backpatching anything
here.

The problem report from Nikolay was probably an unusually bad case,
where the pending list cleanup/insertion was particularly expensive.
This *is* really awful behavior, but that alone isn't a good enough
reason to backpatch.

--
Peter Geoghegan


