On Mon, Mar 30, 2026 at 7:17 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:
>
> Thank you for working on this, very useful feature. Sharing a few thoughts:
>
> 1. Shouldn't we also cap by max_parallel_workers to avoid wasting DSM resources in parallel_vacuum_compute_workers?
Actually, autovacuum_max_parallel_workers is already limited by max_parallel_workers. It is not clear to me why we allow setting this GUC higher than max_parallel_workers, but if this happens, I think it is a user misconfiguration.
Isn't there wasted effort here if the user misconfigures it, since we cannot launch that many workers anyway? I suggest adding a check here.
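If such a check is added, it could be a minimal clamp over both GUCs. A sketch of the idea (function and parameter names are illustrative, not the actual patch code):

```c
#include <assert.h>

/*
 * Hypothetical simplification of the capping discussed above: clamp the
 * requested parallel-worker count by both autovacuum_max_parallel_workers
 * and max_parallel_workers, so a misconfigured GUC cannot cause shared
 * memory (DSM) to be sized for workers that can never be launched.
 */
static int
compute_parallel_workers(int requested,
						 int av_max_parallel_workers,
						 int max_parallel_workers)
{
	int		cap = (av_max_parallel_workers < max_parallel_workers)
		? av_max_parallel_workers
		: max_parallel_workers;

	return (requested < cap) ? requested : cap;
}
```

The point is simply that the effective cap is min(autovacuum_max_parallel_workers, max_parallel_workers), so DSM is never reserved for unlaunchable workers.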
> 2. Is it intentional that other autovacuum workers do not yield cost limits to the parallel autovacuum workers? Cost limits are distributed first equally to the autovacuum workers,
> and then they share that. Therefore, parallel workers will be heavily throttled. IIUC, this problem doesn't exist with manual vacuum.
> If we don't fix this, at least we should document this.
Parallel a/v workers inherit cost-based parameters (including vacuum_cost_limit) from the leader worker. Do you mean that this can be too low a value for a parallel operation? If so, the user can manually increase the vacuum_cost_limit reloption for the tables where parallel a/v sleeps too much (due to cost delay).
They don't inherit it but share it, don't they?
BTW, describing how the cost limit propagates to parallel a/v workers is worth mentioning in the documentation. I'll add it in the next patch version.
Yes, that helps.
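To make the throttling concern concrete: if the total autovacuum cost budget is split equally among the active a/v workers, and a leader's share is then what its parallel workers operate under, each parallel worker runs on a small fraction of the budget. A toy sketch of that arithmetic (a simplification; the real balancing logic rebalances dynamically):

```c
#include <assert.h>

/*
 * Toy illustration of the cost-limit sharing discussed above: the leader's
 * share of the total budget is roughly total / n_active_workers, and that
 * share is what its parallel workers collectively operate under.  This is
 * not the actual balancing code.
 */
static int
leader_cost_share(int total_cost_limit, int active_av_workers)
{
	return total_cost_limit / active_av_workers;
}
```

So with a cost limit of 200 and four active a/v workers, the whole parallel vacuum of one table proceeds under a budget of about 50, regardless of how many parallel workers it launches.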
> 3. Additionally, is there a point where, based on the cost limits, launching additional workers becomes counterproductive compared to running fewer workers and preventing it?
I don't think we can find a universal limit that would be appropriate for all possible configurations. For now we use a pretty simple formula for the parallel degree calculation. Since the user has several ways to affect this formula, I guess there will be no problems with it (except my concerns about the opt-out style).