Re: POC: Parallel processing of indexes in autovacuum - Mailing list pgsql-hackers

From: Masahiko Sawada
Subject: Re: POC: Parallel processing of indexes in autovacuum
Msg-id: CAD21AoDtPpkkQ_h1yf4oTx1qn4SRdTeVY3qs+9J07fYqa_4Gww@mail.gmail.com
In response to: Re: POC: Parallel processing of indexes in autovacuum (Daniil Davydov <3danissimo@gmail.com>)
Responses: Re: POC: Parallel processing of indexes in autovacuum
List: pgsql-hackers
On Sun, Jul 6, 2025 at 5:00 PM Daniil Davydov <3danissimo@gmail.com> wrote:
>
> Hi,
>
> On Fri, Jul 4, 2025 at 9:21 PM Matheus Alcantara
> <matheusssilv97@gmail.com> wrote:
> >
> > The "autovacuum_max_parallel_workers" declared on guc_tables.c mention
> > that is capped by "max_worker_process":
> > +       {
> > +               {"autovacuum_max_parallel_workers", PGC_SIGHUP, VACUUM_AUTOVACUUM,
> > +                       gettext_noop("Maximum number of parallel autovacuum workers, that can be taken from bgworkers pool."),
> > +                       gettext_noop("This parameter is capped by \"max_worker_processes\" (not by \"autovacuum_max_workers\"!)."),
> > +               },
> > +               &autovacuum_max_parallel_workers,
> > +               0, 0, MAX_BACKENDS,
> > +               check_autovacuum_max_parallel_workers, NULL, NULL
> > +       },
> >
> > IIUC the code, it is capped by "max_worker_processes", but Masahiko has
> > mentioned on [1] that it should be capped by max_parallel_workers.
> >
>
> Thanks for looking into it!
>
> To be honest, I don't think that this parameter should be explicitly
> capped at all.
> Other parallel operations (for example parallel index build or VACUUM
> PARALLEL) just request as many workers as they want without looking at
> 'max_parallel_workers'.
> And they will not complain if not all requested workers were launched.
>
> Thus, even if 'autovacuum_max_parallel_workers' is higher than
> 'max_parallel_workers', the worst that can happen is that not all
> requested workers will be running (which is a common situation).
> Users can handle it by looking for logs like "planned vs. launched"
> and increasing 'max_parallel_workers' if needed.
>
> On the other hand, obviously it doesn't make sense to request more
> workers than 'max_worker_processes' (moreover, this parameter cannot
> be changed as easily as 'max_parallel_workers').
>
> I will keep the 'max_worker_processes' limit, so autovacuum will not
> waste time initializing a parallel context if there is no chance that
> the request will succeed.
> But it's worth remembering that, in practice, the
> 'autovacuum_max_parallel_workers' parameter will always be implicitly
> capped by 'max_parallel_workers'.
>
> What do you think about it?
>
> > But the postgresql.conf.sample says that it is limited by
> > max_parallel_workers:
> > +#autovacuum_max_parallel_workers = 0   # disabled by default and limited by max_parallel_workers
>
> Good catch, I'll fix it.
>
> > ---
> >
> > We are actually capping autovacuum_max_parallel_workers by
> > max_worker_processes - 1, so we can't have 10 max_worker_processes and
> > 10 autovacuum_max_parallel_workers. Is that correct?
>
> Yep. The explanation can be found just above in this email.
>
> > ---
> >
> > Could unnecessarily locking AutovacuumLock, when none of the ifs is
> > true, cause some performance issue here? I don't think that this would
> > be a serious problem because this code will only be called if the
> > configuration file is changed during the autovacuum execution, right?
> > But I could be wrong, so just sharing my thoughts on this (still
> > learning about [auto]vacuum code).
> >
> > +
> > +/*
> > + * Make sure that number of available parallel workers corresponds to the
> > + * autovacuum_max_parallel_workers parameter (after it was changed).
> > + */
> > +static void
> > +check_parallel_av_gucs(int prev_max_parallel_workers)
> > +{
> > +       LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
> > +
> > +       if (AutoVacuumShmem->av_available_parallel_workers >
> > +               autovacuum_max_parallel_workers)
> > +       {
> > +               Assert(prev_max_parallel_workers > autovacuum_max_parallel_workers);
> > +
> >
>
> This function may be called by the a/v launcher when we already have
> some a/v workers running.
> A/v workers can change the
> AutoVacuumShmem->av_available_parallel_workers value, so I think we
> should acquire the appropriate lock before reading it.
>
> > Typo in "exeed"
> >
> > +               /*
> > +                * Number of available workers must not exeed limit.
> > +                *
> > +                * Note, that if some parallel autovacuum workers are running at this
> > +                * moment, available workers number will not exeed limit after releasing
> > +                * them (see ParallelAutoVacuumReleaseWorkers).
> > +               */
>
> Oops. I'll fix it.
>
> > ---
> >
> > I'm not seeing any usage of this macro?
> > +/*
> > + * RelationGetParallelAutovacuumWorkers
> > + *             Returns the relation's parallel_autovacuum_workers reloption setting.
> > + *             Note multiple eval of argument!
> > + */
> > +#define RelationGetParallelAutovacuumWorkers(relation, defaultpw) \
> > +       ((relation)->rd_options ? \
> > +        ((StdRdOptions *) (relation)->rd_options)->autovacuum.parallel_autovacuum_workers : \
> > +        (defaultpw))
> > +
> >
>
> Yes, this is a relic of a past implementation. I'll delete this macro.
>
> >
> > I've made some tests and I can confirm that it is working correctly as
> > far as I can see. I think that it would be good to start including the
> > documentation changes, what do you think?
> >
>
> It sounds tempting :)
> But perhaps first we should agree on the limitation of the
> 'autovacuum_max_parallel_workers' parameter.
>
> Please see the v6 patches:
> 1) Fixed typos in autovacuum.c and postgresql.conf.sample
> 2) Removed unused macro 'RelationGetParallelAutovacuumWorkers'
>

Thank you for updating the patch! Here are some review comments:

---
-   shared->maintenance_work_mem_worker =
-       (nindexes_mwm > 0) ?
-       maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-       maintenance_work_mem;
+
+   if (AmAutoVacuumWorkerProcess())
+       shared->maintenance_work_mem_worker =
+           (nindexes_mwm > 0) ?
+           autovacuum_work_mem / Min(parallel_workers, nindexes_mwm) :
+           autovacuum_work_mem;
+   else
+       shared->maintenance_work_mem_worker =
+           (nindexes_mwm > 0) ?
+           maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
+           maintenance_work_mem;

Since we have similar code in dead_items_alloc(), I think it's better
to follow it:

    int         vac_work_mem = AmAutoVacuumWorkerProcess() &&
        autovacuum_work_mem != -1 ?
        autovacuum_work_mem : maintenance_work_mem;

That is, we calculate vac_work_mem first and then calculate
shared->maintenance_work_mem_worker. I think it's more straightforward
as the formula of maintenance_work_mem_worker is the same whereas the
amount of memory used for vacuum and autovacuum varies.
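
Concretely, the combined version could look something like this (just a
sketch of the suggestion, not a tested change):

    /* Pick the memory budget depending on who is running the vacuum. */
    int         vac_work_mem = AmAutoVacuumWorkerProcess() &&
        autovacuum_work_mem != -1 ?
        autovacuum_work_mem : maintenance_work_mem;

    /* The formula is then identical for vacuum and autovacuum. */
    shared->maintenance_work_mem_worker =
        (nindexes_mwm > 0) ?
        vac_work_mem / Min(parallel_workers, nindexes_mwm) :
        vac_work_mem;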

---
+   nlaunched_workers = pvs->pcxt->nworkers_launched; /* remember this value */
    DestroyParallelContext(pvs->pcxt);
+
+   /* Release all launched (i.e. reserved) parallel autovacuum workers. */
+   if (AmAutoVacuumWorkerProcess())
+       ParallelAutoVacuumReleaseWorkers(nlaunched_workers);
+

Why don't we release workers before destroying the parallel context?
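
In other words, something like the following reordering would also make
the temporary variable unnecessary (a sketch, assuming
ParallelAutoVacuumReleaseWorkers() does not depend on the parallel
context still being alive):

    /* Release all launched (i.e. reserved) parallel autovacuum workers. */
    if (AmAutoVacuumWorkerProcess())
        ParallelAutoVacuumReleaseWorkers(pvs->pcxt->nworkers_launched);

    DestroyParallelContext(pvs->pcxt);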

---
@@ -558,7 +576,9 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
     * We don't allow performing parallel operation in standalone backend or
     * when parallelism is disabled.
     */
-   if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+   if (!IsUnderPostmaster ||
+       (autovacuum_max_parallel_workers == 0 && AmAutoVacuumWorkerProcess()) ||
+       (max_parallel_maintenance_workers == 0 && !AmAutoVacuumWorkerProcess()))
        return 0;

    /*
@@ -597,15 +617,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
    parallel_workers = (nrequested > 0) ?
        Min(nrequested, nindexes_parallel) : nindexes_parallel;

-   /* Cap by max_parallel_maintenance_workers */
-   parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+   /* Cap by GUC variable */
+   parallel_workers = AmAutoVacuumWorkerProcess() ?
+       Min(parallel_workers, autovacuum_max_parallel_workers) :
+       Min(parallel_workers, max_parallel_maintenance_workers);

    return parallel_workers;

How about calculating the maximum number of workers once and using it
in both places above?
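
For example (a sketch with a hypothetical local variable max_workers):

    int         max_workers = AmAutoVacuumWorkerProcess() ?
        autovacuum_max_parallel_workers :
        max_parallel_maintenance_workers;

    /*
     * We don't allow performing parallel operation in standalone backend
     * or when parallelism is disabled.
     */
    if (!IsUnderPostmaster || max_workers == 0)
        return 0;

    ...

    /* Cap by the applicable GUC variable */
    parallel_workers = Min(parallel_workers, max_workers);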

---
+   /* Check how many workers can provide autovacuum. */
+   if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+       nworkers = ParallelAutoVacuumReserveWorkers(nworkers);
+

I think it's better to move this code to right after setting "nworkers
= Min(nworkers, pvs->pcxt->nworkers);" as it's more closely related
code.

The comment needs to be updated as it doesn't match what the function
actually does (i.e. reserving the workers).
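
That is, something along these lines (a sketch; the comment wording is
only a suggestion):

    nworkers = Min(nworkers, pvs->pcxt->nworkers);

    /* Reserve parallel autovacuum workers from the shared pool. */
    if (AmAutoVacuumWorkerProcess() && nworkers > 0)
        nworkers = ParallelAutoVacuumReserveWorkers(nworkers);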

---
        /* Reinitialize parallel context to relaunch parallel workers */
        if (num_index_scans > 0)
+       {
            ReinitializeParallelDSM(pvs->pcxt);

+           /*
+            * Release all launched (i.e. reserved) parallel autovacuum
+            * workers.
+            */
+           if (AmAutoVacuumWorkerProcess())
+               ParallelAutoVacuumReleaseWorkers(pvs->pcxt->nworkers_launched);
+       }

Why do we need to release all workers here? If there is a reason, we
should mention it as a comment.

---
@@ -706,16 +751,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan

        if (vacuum)
            ereport(pvs->shared->elevel,
-                   (errmsg(ngettext("launched %d parallel vacuum worker for index vacuuming (planned: %d)",
-                                    "launched %d parallel vacuum workers for index vacuuming (planned: %d)",
+                   (errmsg(ngettext("launched %d parallel %svacuum worker for index vacuuming (planned: %d)",
+                                    "launched %d parallel %svacuum workers for index vacuuming (planned: %d)",
                                     pvs->pcxt->nworkers_launched),
-                           pvs->pcxt->nworkers_launched, nworkers)));
+                           pvs->pcxt->nworkers_launched, AmAutoVacuumWorkerProcess() ? "auto" : "", nworkers)));

The "%svacuum" part doesn't work in terms of translation. We need to
construct the whole sentence instead. But do we need this log message
change in the first place? IIUC autovacuums write logs only when the
execution time exceed the log_autovacuum_min_duration (or its
reloption). The patch unconditionally sets LOG level for autovacuums
but I'm not sure it's consistent with other autovacuum logging
behavior:

+   int         elevel = AmAutoVacuumWorkerProcess() ||
+       vacrel->verbose ?
+       INFO : DEBUG2;
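
For the translation issue, if the message change is kept, both variants
would need to be spelled out as complete sentences, e.g. (a sketch):

    if (AmAutoVacuumWorkerProcess())
        ereport(pvs->shared->elevel,
                (errmsg(ngettext("launched %d parallel autovacuum worker for index vacuuming (planned: %d)",
                                 "launched %d parallel autovacuum workers for index vacuuming (planned: %d)",
                                 pvs->pcxt->nworkers_launched),
                        pvs->pcxt->nworkers_launched, nworkers)));
    else
        ereport(pvs->shared->elevel,
                (errmsg(ngettext("launched %d parallel vacuum worker for index vacuuming (planned: %d)",
                                 "launched %d parallel vacuum workers for index vacuuming (planned: %d)",
                                 pvs->pcxt->nworkers_launched),
                        pvs->pcxt->nworkers_launched, nworkers)));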

---
- *   Support routines for parallel vacuum execution.
+ *   Support routines for parallel [auto]vacuum execution.

The patch includes the change of "vacuum" -> "[auto]vacuum" in many
places. While I think we need to mention that vacuumparallel.c supports
autovacuum, I'm not sure we really need all of them. If we accept this
style, we would require all subsequent changes to follow it, which
could increase maintenance costs.

---
@@ -299,6 +301,7 @@ typedef struct
    WorkerInfo  av_startingWorker;
    AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
    pg_atomic_uint32 av_nworkersForBalance;
+   uint32 av_available_parallel_workers;

Other field names seem to follow a consistent naming rule: an 'av_'
prefix followed by a name in camel case. So how about renaming it to
av_freeParallelWorkers or something along those lines?
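
I.e., a sketch of the rename:

    pg_atomic_uint32 av_nworkersForBalance;
    uint32      av_freeParallelWorkers;    /* instead of av_available_parallel_workers */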

---
+int
+ParallelAutoVacuumReserveWorkers(int nworkers)
+{

Other exposed functions have an "AutoVacuum" prefix, so how about
renaming it to AutoVacuumReserveParallelWorkers() or something along
those lines?
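
E.g. (a sketch; I'm assuming the release counterpart would be renamed to
match, and that it returns void):

    int         AutoVacuumReserveParallelWorkers(int nworkers);
    void        AutoVacuumReleaseParallelWorkers(int nworkers);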

---
+   if (AutoVacuumShmem->av_available_parallel_workers < nworkers)
+   {
+       /* Provide as many workers as we can. */
+       can_launch = AutoVacuumShmem->av_available_parallel_workers;
+       AutoVacuumShmem->av_available_parallel_workers = 0;
+   }
+   else
+   {
+       /* OK, we can provide all requested workers. */
+       can_launch = nworkers;
+       AutoVacuumShmem->av_available_parallel_workers -= nworkers;
+   }

Can we simplify this logic as follows?

 can_launch = Min(AutoVacuumShmem->av_available_parallel_workers, nworkers);
 AutoVacuumShmem->av_available_parallel_workers -= can_launch;

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com


