Hi,
On Tue, Jul 8, 2025 at 10:20 PM Matheus Alcantara
<matheusssilv97@gmail.com> wrote:
>
> On Sun Jul 6, 2025 at 5:00 AM -03, Daniil Davydov wrote:
> > I will keep the 'max_worker_processes' limit, so autovacuum will not
> > waste time initializing a parallel context if there is no chance that
> > the request will succeed.
> > But it's worth remembering that actually the
> > 'autovacuum_max_parallel_workers' parameter will always be implicitly
> > capped by 'max_parallel_workers'.
> >
> > What do you think about it?
> >
>
> It makes sense to me. The main benefit I see in capping the
> autovacuum_max_parallel_workers parameter is that users will see an
> "invalid value for parameter "autovacuum_max_parallel_workers"" error in
> the logs instead of needing to search for "planned vs. launched", which can
> be tricky if log_min_messages is not set to at least the info level (the
> default warning level will not show this log message).
>
As a precedent, consider (for example) the 'max_parallel_workers_per_gather'
parameter, which allows setting values higher than 'max_parallel_workers'
without throwing an error or warning.
'autovacuum_max_parallel_workers' will behave the same way.
> If we decide not to cap this in code, I think it would at least be good to
> mention this in the documentation.
Sure, it is worth mentioning in the documentation.
--
Best regards,
Daniil Davydov