On 3/2/23 1:36 AM, Masahiko Sawada wrote:
For example, I guess we will need to take care of changes of
maintenance_work_mem. Currently we initialize the dead tuple space at
the beginning of lazy vacuum, but perhaps we would need to
enlarge/shrink it based on the new value?
Doesn't the dead tuple space grow as needed? Last I looked we don't allocate up to 1GB right off the bat.
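(For context on what "grows as needed" would mean here: roughly a TID store that starts small and doubles up to a cap derived from the memory budget, instead of allocating the full budget up front. The sketch below is purely illustrative, not PostgreSQL's actual dead_items code, and every name in it is hypothetical.)

/*
 * Illustrative only: a dead-TID buffer that grows geometrically up to a
 * cap derived from the memory budget, rather than allocating the whole
 * budget right away.  All names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TID_SIZE        6           /* bytes per heap TID */
#define INITIAL_TIDS    1024

typedef struct DeadTidBuffer
{
    size_t  nused;                  /* TIDs stored so far */
    size_t  nallocated;             /* current capacity, in TIDs */
    size_t  max_tids;               /* cap derived from the memory budget */
    char   *tids;                   /* the TID array itself */
} DeadTidBuffer;

static void
dead_tid_buffer_init(DeadTidBuffer *buf, size_t budget_bytes)
{
    buf->nused = 0;
    buf->nallocated = INITIAL_TIDS;
    buf->max_tids = budget_bytes / TID_SIZE;
    buf->tids = malloc(buf->nallocated * TID_SIZE);
}

/* Returns 0 once the budget is exhausted: time for an index-vacuum pass. */
static int
dead_tid_buffer_add(DeadTidBuffer *buf, const char tid[TID_SIZE])
{
    if (buf->nused == buf->max_tids)
        return 0;
    if (buf->nused == buf->nallocated)
    {
        /* Grow geometrically, but never past the configured cap. */
        size_t  newsize = buf->nallocated * 2;

        if (newsize > buf->max_tids)
            newsize = buf->max_tids;
        buf->tids = realloc(buf->tids, newsize * TID_SIZE);
        buf->nallocated = newsize;
    }
    memcpy(buf->tids + buf->nused * TID_SIZE, tid, TID_SIZE);
    buf->nused++;
    return 1;
}

int
main(void)
{
    DeadTidBuffer   buf;
    char            tid[TID_SIZE] = {0};
    size_t          n = 0;

    dead_tid_buffer_init(&buf, 1 << 20);    /* 1MB budget for the demo */
    while (dead_tid_buffer_add(&buf, tid))
        n++;
    printf("stored %zu TIDs before hitting the budget\n", n);
    free(buf.tids);
    return 0;
}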
I don't think we need to do anything about that initially; just because the config can be changed in a more granular way doesn't mean we have to react to every change for the current operation.
Perhaps we can mention in the docs that a change to maintenance_work_mem
will not take effect in the middle of vacuuming a table. But I think it probably
isn't needed.
Agreed.
I disagree that there's no need for this. Sure, if maintenance_work_mem is 10MB then it's no big deal to just abandon your current vacuum and start a new one, but the index vacuuming phase with maintenance_work_mem set to, say, 500MB can take quite a while. Forcing a user to either suck it up or throw away everything done in that phase isn't terribly good.
Of course, if the patch that eliminates the 1GB vacuum limit gets committed, the situation will be even worse.
While it'd be nice to also honor maintenance_work_mem getting set lower, I don't see any need to go through heroics to accomplish that. Simply recording the change and honoring it for future attempts to grow the memory and on future passes through the heap would be plenty.
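A minimal sketch of what I mean, assuming the budget is only re-read at pass boundaries (this is not actual PostgreSQL code; the names and the stubbed-out pass functions are hypothetical):

/*
 * The memory budget is re-read only between passes, so a raised
 * maintenance_work_mem is picked up by later passes, and a lowered one is
 * simply recorded rather than forcing the current pass to shrink.
 */
#include <stdio.h>
#include <stddef.h>

/* Stand-in for reading maintenance_work_mem after a config reload; here it
 * simulates the setting being raised between the first and second pass. */
static size_t
current_budget_bytes(void)
{
    static const size_t simulated[] = {64u << 20, 512u << 20, 512u << 20};
    static int  call = 0;

    return simulated[call < 2 ? call++ : 2];
}

/* Stub: scan the heap, filling the dead-TID store up to 'budget'.
 * Returns 0 once the whole heap has been scanned. */
static int
scan_heap_until_budget_full(size_t budget, int pass)
{
    printf("pass %d: scanning heap with a %zu MB budget\n", pass, budget >> 20);
    return pass < 2;            /* pretend the table needs three passes */
}

/* Stub for the index-vacuum / heap-reap phase that ends each pass. */
static void
vacuum_indexes_and_reap(void)
{
    printf("        vacuuming indexes, then reaping heap tuples\n");
}

int
main(void)
{
    int     pass = 0;
    int     more;

    do
    {
        /* Re-read the budget only at a pass boundary, never mid-pass. */
        size_t  budget = current_budget_bytes();

        more = scan_heap_until_budget_full(budget, pass++);
        vacuum_indexes_and_reap();
    } while (more);

    return 0;
}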
All that said, don't let these suggestions get in the way of committing this. Just having the ability to tweak cost parameters would be a win.