On Tue, Dec 17, 2019 at 6:07 PM Masahiko Sawada
<masahiko.sawada@2ndquadrant.com> wrote:
>
> On Fri, 13 Dec 2019 at 15:50, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > > > I think it shouldn't be more than the number with which we have
> > > > created a parallel context, no? If that is the case, then I think it
> > > > should be fine.
> > >
> > > Right. I thought that ReinitializeParallelDSM() with an additional
> > > argument would shrink the DSM, but I understand now that it doesn't
> > > actually reduce the DSM; it just keeps a variable for the number of
> > > workers to launch, is that right?
> > >
> >
> > Yeah, probably we need to change the nworkers stored in the context,
> > and it should be less than the value already stored there.
> >
> > > And we would also need to call
> > > ReinitializeParallelDSM() at the beginning of index vacuum or index
> > > cleanup, since at the end of index vacuum we don't know whether we
> > > will do index vacuum or index cleanup next.
> > >
> >
> > Right.
>
> I've attached the latest version of the patch set. These patches require
> the gist vacuum patch[1]. The patch set incorporates the review comments.
>
I was analyzing your changes related to ReinitializeParallelDSM() and
it seems like we might launch more workers than necessary for the
bulkdelete phase. While creating the parallel context, we use the
maximum of "workers required for the bulkdelete phase" and "workers
required for cleanup", but now if the number of workers required for
the bulkdelete phase is smaller than for the cleanup phase (as
mentioned by you in one example), then we would end up launching more
workers than needed for the bulkdelete phase.
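
To make the concern concrete, here is a rough sketch of the flow as I
understand it (pseudocode, not the patch itself: the second argument to
ReinitializeParallelDSM() is the proposed addition, and the variable
names and the "parallel_vacuum_main" entry point are only for
illustration):

    /* Create the context once, sized for the phase needing the most workers. */
    int nworkers_bulkdel;   /* workers required for the bulkdelete phase */
    int nworkers_cleanup;   /* workers required for the cleanup phase */

    pcxt = CreateParallelContext("postgres", "parallel_vacuum_main",
                                 Max(nworkers_bulkdel, nworkers_cleanup));
    InitializeParallelDSM(pcxt);

    /*
     * Before each bulkdelete or cleanup pass, re-initialize and cap the
     * number of workers to launch for that particular phase.  Without the
     * cap, LaunchParallelWorkers() starts pcxt->nworkers workers, which is
     * the maximum of the two phases and can exceed what bulkdelete needs.
     */
    ReinitializeParallelDSM(pcxt, nworkers_bulkdel);   /* proposed extra argument */
    LaunchParallelWorkers(pcxt);
    ...
    WaitForParallelWorkersToFinish(pcxt);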
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com