Re: cost based vacuum (parallel) - Mailing list pgsql-hackers

From Masahiko Sawada
Subject Re: cost based vacuum (parallel)
Date
Msg-id CA+fd4k4yES6OcHOXzF+tz1Wp9G+78uXrP6pdvy-3bbBWRxnUCg@mail.gmail.com
In response to Re: cost based vacuum (parallel)  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Mon, 4 Nov 2019 at 19:26, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Mon, Nov 4, 2019 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Mon, Nov 4, 2019 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > I think approach-2 is better in throttling the system as it doesn't
> > > have the drawback of the first approach, but it might be a bit tricky
> > > to implement.
> >
> > I might be missing something, but I think that the drawback of
> > approach-1 could also appear with approach-2, depending on which
> > index pages are loaded in shared buffers and on the vacuum delay
> > settings.
> >
>
> Can you be a bit more specific about this?

Suppose there are two indexes: one is entirely loaded in shared buffers
while the other isn't loaded at all. The vacuum worker that processes
the former index hits every page in shared buffers, whereas the worker
that processes the latter index has to read every page from either the
OS page cache or disk. Even if both the cost limit and the cost balance
are split evenly among the workers, because the costs of page hits and
page misses are different, it's possible that one vacuum worker sleeps
while the other workers are still doing I/O.
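
To make the asymmetry concrete, here is a minimal standalone simulation
(not PostgreSQL source). It assumes the stock cost settings
vacuum_cost_page_hit = 1, vacuum_cost_page_miss = 10 and
vacuum_cost_limit = 200, with the limit split evenly between two
workers, and it simplifies the balance/sleep accounting that
vacuum_delay_point() actually performs:

/*
 * Toy model: two workers, each given half of vacuum_cost_limit.
 * Worker 0 only hits pages in shared buffers, worker 1 misses every
 * page.  Count how often each one would have to sleep.
 */
#include <stdio.h>

#define COST_PAGE_HIT   1       /* assumed vacuum_cost_page_hit */
#define COST_PAGE_MISS  10      /* assumed vacuum_cost_page_miss */
#define COST_LIMIT      200     /* assumed vacuum_cost_limit */
#define N_WORKERS       2
#define PAGES_PER_INDEX 10000

int
main(void)
{
    int         per_worker_limit = COST_LIMIT / N_WORKERS;   /* 100 each */
    int         costs[N_WORKERS] = {COST_PAGE_HIT, COST_PAGE_MISS};
    const char *labels[N_WORKERS] = {"all hits", "all misses"};

    for (int w = 0; w < N_WORKERS; w++)
    {
        int         balance = 0;
        int         sleeps = 0;

        for (int page = 0; page < PAGES_PER_INDEX; page++)
        {
            balance += costs[w];
            if (balance >= per_worker_limit)
            {
                sleeps++;       /* where the real code would nap */
                balance = 0;
            }
        }
        printf("worker %d (%s): %d sleeps\n", w, labels[w], sleeps);
    }
    return 0;
}

With these assumed numbers the all-miss worker crosses its share of the
limit after every 10 pages, while the all-hit worker does so only after
every 100 pages, so the worker doing real I/O spends far more of its
time sleeping than the one served from shared buffers.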

Regards,

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


