Re: POC: Parallel processing of indexes in autovacuum - Mailing list pgsql-hackers

| From | Daniil Davydov |
|---|---|
| Subject | Re: POC: Parallel processing of indexes in autovacuum |
| Date | |
| Msg-id | CAJDiXgjgn87sH1-MmONPKkeYJG83C0ChrYkYn9UcRonLhOOfOw@mail.gmail.com |
| In response to | Re: POC: Parallel processing of indexes in autovacuum (Masahiko Sawada <sawada.mshk@gmail.com>) |
| Responses | Re: POC: Parallel processing of indexes in autovacuum |
| List | pgsql-hackers |
Hi,

On Wed, Mar 11, 2026 at 1:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Tue, Mar 3, 2026 at 10:59 PM Daniil Davydov <3danissimo@gmail.com> wrote:
> >
> > So, my point was: why should we have this explicit limitation if it
> > 1) doesn't guard us from something bad and 2) can be violated at any time
> > (via ALTER SYSTEM SET ...).
> >
> > Now it seems to me that limiting our parameter by max_parallel_workers is
> > more about grouping of logically related parameters, not a practical necessity.
>
> I believe there is also a benefit for users when they want to disable
> all parallel behavior. If av_max_parallel_workers is in the
> max_parallel_workers group, they would have to set just
> max_parallel_workers to 0. Otherwise, they would have to set both
> max_parallel_workers and av_max_parallel_workers.

OK, thank you for the explanation!

> I agree that we need to make sure that parallel workers are released
> even during ERROR handling, but I don't think it's important to check
> the places where AutoVacuumReleaseAllParallelWorkers() is called by
> using regression tests. It's more important and future-proof that we
> check whether all workers are released according to the shmem data. In
> other words, even if we call AutoVacuumReleaseAllParallelWorkers() in
> an unexpected call path in an ERROR case, it's still okay if we
> successfully release all workers in the end. These regression tests
> should test the database behavior, not which specific code path is
> taken.

Indeed, I can't remember where else in the tests we check the passage
along specific code paths in this way.

> If we can check whether all workers are released by checking the
> shmem, why do we need to check further where they are released?

My point of view was that this code path is so important that we need
to test it (important in terms of performance).
But of course, even if for some reason we cannot release workers inside
the try/catch block, we can still be sure that they will be released
somewhere else, because we have tested it.

> I think we should not make the function complex just for testing
> purposes. My point is that what we should be testing is the behavior
> -- specifically, whether parallel workers are released at the expected
> timing -- rather than whether a specific code path was executed.

You've convinced me :) I'll add a log message to the "ReleaseWorkers"
function, and the tests will only search for it.

> While I agree that showing only two numbers might lack some
> information for users, I guess the same is true for
> max_parallel_maintenance_workers or other GUC parameters related to
> parallel queries. For instance, suppose we set
> max_parallel_maintenance_workers to 2. If the table has 4 (large
> enough) indexes, we would plan to execute a parallel vacuum with 2
> workers instead of 4 due to the max_parallel_maintenance_workers
> limit, and it's even possible that only 1 worker can launch due to a
> max_worker_processes shortage. In this case, we currently consider
> that 2 workers are planned. Isn't it the same situation as the case
> where we reserved 2 parallel vacuum workers for autovacuum for a
> table with 4 indexes?

I don't think that examples with other "max_parallel_" parameters are
appropriate, because those parameters limit the number of parallel
workers for a *single* operation/executor node/... . At the same time,
av_max_parallel_workers limits the total number of parallel workers
across all a/v leaders.

Regarding the situation that you described: the number of planned
workers is reduced inside parallel_vacuum_compute_workers due to the
max_parallel_maintenance_workers limit. I.e., we cannot plan more
workers than the config allows, and that's completely OK. No one
expects the number of "planned workers" to exceed
max_parallel_maintenance_workers.
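The capping step described above can be shown as a simplified model. This is not the actual body of parallel_vacuum_compute_workers() (the real function also filters indexes by min_parallel_index_scan_size and other criteria); it only illustrates why "planned" can never exceed the GUC:

```c
/*
 * Simplified model of planned-worker computation for parallel vacuum.
 * nindexes_parallel: number of indexes eligible for parallel processing.
 * max_maintenance_workers: the max_parallel_maintenance_workers GUC.
 */
static int
compute_parallel_workers(int nindexes_parallel, int max_maintenance_workers)
{
    /* Naively, one worker per parallel-safe index. */
    int parallel_workers = nindexes_parallel;

    /* Cap by the GUC: the planned count is bounded by the limiter. */
    if (parallel_workers > max_maintenance_workers)
        parallel_workers = max_maintenance_workers;

    return parallel_workers;
}
```

With 4 eligible indexes and the GUC set to 2, the planned count is 2, so a later "planned vs. launched" gap can only come from a worker-slot shortage, never from this limiter.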
IMO there is no need to make an effort to track the shortage of
max_parallel_maintenance_workers for VACUUM (PARALLEL), because this
parameter just plays the role of a limiter. We will consider only the
shortage of max_parallel_workers, which can be determined by looking at
"planned vs. launched".

And here is the difference with parallel autovacuum:
av_max_parallel_workers is considered twice, in the
parallel_vacuum_compute_workers and ReserveWorkers functions. So a low
number of launched workers can be explained by a shortage of either
av_max_parallel_workers or max_parallel_workers. Since we want to
distinguish between these cases, we have added the "nreserved" concept.

I see that a few modules can report something like "out of background
worker slots" when they cannot launch more workers due to a
max_parallel_workers shortage (but modules depending on the
"parallel.c" logic don't do so). This fact gave me another idea: if we
don't want to log "nreserved" or some other similar value, maybe we
should add logging after the "ReserveWorkers" function? I.e., if some
workers cannot be reserved, we can emit a log like "out of parallel
autovacuum workers; you should increase the av_max_parallel_workers
parameter". Having this log can help the user distinguish between the
max_parallel_workers and av_max_parallel_workers shortage situations.
What do you think?

Summary:
1) I think that we should not look at maintenance vacuum while
considering how to inform the user about a parameter shortage for
autovacuum, because the autovacuum situation is more complicated.
2) I suggest adding a separate log message that is emitted every time
we are unable to start workers due to a shortage of
av_max_parallel_workers.

> > I don't mind, but I don't quite understand the reason. We assume that the
> > minimal value for both variables is 0. Why shouldn't we use an unsigned
> > data type?
> >
> Unsigned integers should be used for bit masks, flags, or when we need
> to handle more than INT_MAX. Signed integers are preferable in other
> cases, as we're using signed integers for controlling the number of
> workers and autovacuum_max_parallel_workers is defined as a signed int
> (which could be stored in AutoVacuumShmem->av_maxParallelWorkers).

I understand, thank you.

> * 0001 patch:
>
> + /* Cannot release more workers than reserved */
> + Assert(nworkers <= av_nworkers_reserved);
>
> I think it's better to use Min() to cap the number of workers to be
> released by av_nworkers_reserved, as Assert() won't work in release
> builds.

I agree.

> * 0004 patch:
>
> Can we write the same test cases without relying on the 0002 patch
> (i.e., worker usage logging)? We check the worker usage log at two
> places in the regression tests. The idea is that we write the number
> of workers planned, reserved, and launched at DEBUG log level and
> check these logs in the regression tests. Patches 0001, 0003, and
> 0004 can be merged before push, while we might want more discussion on
> the 0002 patch.

Possibly we can introduce a new injection point, or a new log message
for it. But I assume that the subject of discussion in patch 0002 is
the "nreserved" logic, and the "nlaunched/nplanned" logic does not
raise any questions. I suggest splitting the 0002 patch into two
parts: 1) the basic logic and 2) the additional logic with nreserved or
something else. The second part can be discussed in isolation from the
patch set. If we do this, we may not have to change the tests. What do
you think?

Thank you for the review! Please see the updated set of patches. I
haven't touched patch 0002 yet, because I'd like to hear your opinion
on my suggestions above first.

--
Best regards,
Daniil Davydov
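The Min()-versus-Assert() point can be made concrete with a small sketch (names are illustrative, not the patch's actual release function): Assert() compiles away in release builds, while Min() enforces the cap unconditionally, so a caller passing a too-large count cannot drive the reserved counter negative.

```c
/* PostgreSQL defines Min() in c.h; reproduced here for a standalone sketch. */
#define Min(x, y) ((x) < (y) ? (x) : (y))

/*
 * Release at most the number of workers actually reserved. With Min()
 * the cap holds even in release builds, where a bare Assert() would be
 * compiled out.
 */
static int
release_workers(int *nworkers_reserved, int nworkers)
{
    nworkers = Min(nworkers, *nworkers_reserved);
    *nworkers_reserved -= nworkers;
    return nworkers;
}
```

An over-release request (say, 5 against 2 reserved) is then silently clamped to 2 and the counter ends at 0 rather than -3.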
Attachment
- v25-0003-Cost-based-parameters-propagation-for-parallel-a.patch
- v25-0004-Tests-for-parallel-autovacuum.patch
- v25-0002-Logging-for-parallel-autovacuum.patch
- v25-0001-Parallel-autovacuum.patch
- v25-0005-Documentation-for-parallel-autovacuum.patch
- v24--v25-diff-for-0004.patch
- v24--v25-diff-for-0001.patch