On Tue, Dec 7, 2021 at 12:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, Dec 7, 2021 at 6:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Mon, Dec 6, 2021 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > On Fri, Dec 3, 2021 at 6:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > > >
> > > > >
> > > > > 3. /*
> > > > > * Copy the index bulk-deletion result returned from ambulkdelete and
> > > > > @@ -2940,19 +2935,20 @@ parallel_process_one_index(Relation indrel,
> > > > > * Since all vacuum workers write the bulk-deletion result at different
> > > > > * slots we can write them without locking.
> > > > > */
> > > > > - if (shared_istat && !shared_istat->updated && istat_res != NULL)
> > > > > + if (!pindstats->istat_updated && istat_res != NULL)
> > > > > {
> > > > > - memcpy(&shared_istat->istat, istat_res, sizeof(IndexBulkDeleteResult));
> > > > > - shared_istat->updated = true;
> > > > > + memcpy(&(pindstats->istat), istat_res, sizeof(IndexBulkDeleteResult));
> > > > > + pindstats->istat_updated = true;
> > > > >
> > > > > /* Free the locally-allocated bulk-deletion result */
> > > > > pfree(istat_res);
> > > > > -
> > > > > - /* return the pointer to the result from shared memory */
> > > > > - return &shared_istat->istat;
> > > > > }
> > > > >
> > > > > I think here now we copy the results both for local and parallel
> > > > > cleanup. Isn't it better to write something about that in comments as
> > > > > it is not clear from current comments?
> > > >
> > > > What do you mean by "local cleanup"?
> > > >
> > >
> > > Clean-up performed by leader backend.
> >
> > I might be missing your points but I think the patch doesn't change
> > the behavior around these codes. With the patch, we allocate
> > IndexBulkDeleteResult on DSM for every index but the patch doesn't
> > change the fact that in parallel vacuum/cleanup cases, we copy
> > IndexBulkDeleteResult returned from ambulkdelete() or
> > amvacuumcleanup() to DSM space. Non-parallel vacuum doesn't use this
> > function.
> >
>
> I was talking about when we call parallel_vacuum_process_one_index()
> via parallel_vacuum_process_unsafe_indexes() where the leader
> processes the indexes that will be skipped by workers. Isn't that case
> slightly different now because previously in that case we would not
> have done the copy but now we will copy the stats in that case as
> well? Am I missing something?
>
I got your point. Yes, with the patch, we copy the stats to DSM even
if the index doesn't participate in parallel vacuum at all.
Previously, these statistics were allocated in local memory. I've
updated the comments at the declaration of lvpindstats.
I've attached an updated patch. I've removed the 0003 patch that added
regression tests, as per the discussion. Regarding the terminology
"bulkdel" and "cleanup" you pointed out, I've addressed that in the
0002 patch while moving the code to vacuumparallel.c. In that file, we
can consistently use the terms "bulkdel" and "cleanup" instead of
"vacuum" and "cleanup".
Regards,
--
Masahiko Sawada
EDB: https://www.enterprisedb.com/