On Wed, Apr 1, 2020 at 8:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > While testing, I found an issue: during a parallel vacuum, the
> > shared_blks_hit + shared_blks_read counts were higher than expected.
> > After some investigation, I found that during the index-cleanup phase
> > nworkers is -1, so we don't try to launch any workers, but
> > "lps->pcxt->nworkers_launched" still held the worker count from the
> > previous launch, and shared memory still held the old buffer-usage
> > data, which was never updated since no workers were launched.
> >
> > diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
> > index b97b678..5dfaf4d 100644
> > --- a/src/backend/access/heap/vacuumlazy.c
> > +++ b/src/backend/access/heap/vacuumlazy.c
> > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,
> >  	 * Next, accumulate buffer usage.  (This must wait for the workers to
> >  	 * finish, or we might get incomplete data.)
> >  	 */
> > -	for (i = 0; i < lps->pcxt->nworkers_launched; i++)
> > +	nworkers = Min(nworkers, lps->pcxt->nworkers_launched);
> > +	for (i = 0; i < nworkers; i++)
> >  		InstrAccumParallelQuery(&lps->buffer_usage[i]);
> >
> > It worked after the above fix.
> >
>
> Good catch. I think we should not even call
> WaitForParallelWorkersToFinish in such a case. So, I guess the fix
> could be:
>
> if (nworkers > 0)
> {
> 	WaitForParallelWorkersToFinish(lps->pcxt);
> 	for (i = 0; i < lps->pcxt->nworkers_launched; i++)
> 		InstrAccumParallelQuery(&lps->buffer_usage[i]);
> }
>
> or something along those lines.
Hmm, right!
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com