Re: POC: Parallel processing of indexes in autovacuum - Mailing list pgsql-hackers

From Sami Imseih
Subject Re: POC: Parallel processing of indexes in autovacuum
Date
Msg-id CAA5RZ0s4eXW1V+fqu-WDBkFh+h43dYke81Tht1V0sFRJ5vjX2Q@mail.gmail.com
In response to Re: POC: Parallel processing of indexes in autovacuum  (Daniil Davydov <3danissimo@gmail.com>)
Responses Re: POC: Parallel processing of indexes in autovacuum
List pgsql-hackers
> On Fri, May 2, 2025 at 11:58 PM Sami Imseih <samimseih@gmail.com> wrote:
> >
> > I am generally -1 on the idea of autovacuum performing parallel
> > index vacuum, because I always felt that the parallel option should
> > be employed in a targeted manner for a specific table. If you have a bunch
> > of large tables, some more important than others, a/v may end
> > up using parallel resources on the least important tables, and you
> > will have to adjust a/v settings per table, etc., to get the right table
> > to be parallel index vacuumed by a/v.
>
> Hm, this is a good point. I think I should clarify one point: in
> practice, there is a common situation where users have one huge table
> among all databases (with 80+ indexes created on it). But, of course,
> in general there may be a few such tables.
> But we can still adjust the autovac_idx_parallel_min_rows parameter.
> If a table has a lot of dead tuples => it is actively used => the
> table is important (?).
> Also, if the user can really determine the "importance" of each of the
> tables, we can provide an appropriate table option. Tables with this
> option set will be processed in parallel in priority order. What do
> you think about such an idea?

I think in most cases the user will want to determine the priority of
a table getting parallel vacuum cycles rather than having autovacuum
determine it. I also see users wanting to stagger vacuums of large
tables with many indexes across some time period, giving those tables
the full number of parallel workers they can afford during those
specific windows. A/V currently does not really allow for this type of
scheduling, and if we provide some kind of GUC to prioritize tables, I
think users will constantly have to be modifying that priority.
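As a concrete illustration of the targeted, user-driven approach (big_table is a hypothetical table name), a manual VACUUM already supports this today:

```sql
-- Vacuum one important table, using up to 4 parallel workers for its
-- indexes (the actual number is capped by max_parallel_maintenance_workers).
VACUUM (PARALLEL 4, VERBOSE) big_table;
```

Scheduling such commands per table, e.g. from cron during quiet windows, gives exactly the staggering described above, which autovacuum cannot currently express.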

I am basing my comments on the scenarios I have seen in the field, and
others may have a different opinion.

> > Also, with the TIDStore improvements for index cleanup, and the practical
> > elimination of multi-pass index vacuums, I see this being even less
> > convincing as something to add to a/v.
>
> If I understood correctly, we are talking about the fact that
> TIDStore can store so many tuples that, in practice, a second pass is
> never needed.
> But the number of passes does not affect the presented optimization in
> any way. We must think about the large number of indexes that must be
> processed. Even within a single pass we can get a 40% increase in
> speed.

I am not discounting that vacuuming a single table with many indexes
may perform better with parallel index vacuuming; I am merely saying
that the TIDStore optimization now makes index vacuums faster, so
there is less of an incentive to use parallelism.

> > Now, if I am going to allocate extra workers to run vacuum in parallel, why
> > not just provide more autovacuum workers instead, so I can get more tables
> > vacuumed within a span of time?
>
> For now, only one process can clean up indexes, so I don't see how
> increasing the number of a/v workers will help in the situation that I
> mentioned above.
> Also, we don't consume additional resources during autovacuum in this
> patch - total number of a/v workers always <= autovacuum_max_workers.

Increasing a/v workers will not help speed up a specific table; what I
am suggesting is that instead of speeding up one table, we just allow
other tables not to be starved of a/v cycles due to a lack of a/v workers.
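To sketch what I mean (big_table is a hypothetical name), the existing knobs already let you widen overall a/v throughput and nudge a specific large table, rather than parallelizing one table's index vacuum:

```sql
-- Allow more concurrent autovacuum workers cluster-wide
-- (this setting requires a server restart to take effect).
ALTER SYSTEM SET autovacuum_max_workers = 10;

-- Have autovacuum visit this large table sooner, once ~1% of its
-- rows are dead, instead of the default 20%.
ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.01);
```

This spends the extra processes on covering more tables per unit of time instead of concentrating them on one.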

--
Sami


