> Another idea I came up with is that we can wait for all index vacuums
> to finish while checking and updating the progress information, and
> then calls WaitForParallelWorkersToFinish after confirming all index
> status became COMPLETED. That way, we don’t need to change the
> parallel query infrastructure. What do you think?
Thinking about this a bit more, the idea of using
WaitForParallelWorkersToFinish will not work if the leader is
stuck on a large index: the progress will not be updated until
the leader completes, even if the parallel workers have already
finished.
What are your thoughts about piggybacking on vacuum_delay_point
to update the progress? The leader could keep a counter and
update the progress every few thousand calls to
vacuum_delay_point.
This goes back to your original idea of keeping the progress
updated while scanning the indexes; a rough sketch of what that
could look like follows the quoted function header below.
/*
 * vacuum_delay_point --- check for interrupts and cost-based delay.
 *
 * This should be called in each major loop of VACUUM processing,
 * typically once per page processed.
 */
void
vacuum_delay_point(void)
{
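To make the idea concrete, here is a minimal sketch (not a patch)
of how the leader might refresh the progress from inside
vacuum_delay_point. The interval, the counter, and the
parallel_vacuum_update_progress() helper are made-up names for
illustration only; the real shape of the hook would need
discussion.

/* Sketch only: the names below are placeholders, not existing code. */
#define PROGRESS_UPDATE_INTERVAL    4096    /* "every few thousand calls" */

static uint32 delay_point_counter = 0;

void
vacuum_delay_point(void)
{
    /* existing interrupt check and cost-based delay logic ... */

    /*
     * Leader only: every PROGRESS_UPDATE_INTERVAL calls, copy the
     * parallel workers' index vacuum progress into the leader's
     * progress counters (e.g. via pgstat_progress_update_param()),
     * so that a long-running index scan in the leader does not
     * leave pg_stat_progress_vacuum stale.  A check that a parallel
     * vacuum is actually in progress would also be needed.
     */
    if (!IsParallelWorker() &&
        ++delay_point_counter % PROGRESS_UPDATE_INTERVAL == 0)
        parallel_vacuum_update_progress();  /* hypothetical helper */
}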
---
Sami Imseih
Amazon Web Services