Re: Parallel heap vacuum - Mailing list pgsql-hackers

From Masahiko Sawada
Subject Re: Parallel heap vacuum
Msg-id CAD21AoA8+=9s3qEF-iTpr_WxjTjdvMOU5t3Rc_XkOpcX1L8gNA@mail.gmail.com
In response to Re: Parallel heap vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Wed, Mar 12, 2025 at 3:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Mar 12, 2025 at 3:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > > >
> > > > On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > > >
> > > > >
> > > > > > However, in the heap vacuum phase, the leader process needed
> > > > > > to process all blocks, resulting in soft page faults while creating
> > > > > > Page Table Entries (PTEs). Without the patch, the backend process had
> > > > > > already created PTEs during the heap scan, thus preventing these
> > > > > > faults from occurring during the heap vacuum phase.
> > > > > >
> > > > >
> > > > > This part is again not clear to me: I am assuming all the data
> > > > > exists in shared buffers before the vacuum, so why would page
> > > > > faults occur in the first place?
> > > >
> > > > IIUC PTEs are process-local data. So even if physical pages are
> > > > loaded into PostgreSQL's shared buffers (and page caches), soft page
> > > > faults (or minor page faults)[1] can occur if those pages are not yet
> > > > mapped in the process's own page table.
> > > >
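To illustrate the point, here is a minimal standalone program (nothing
from the patch; the constants and structure are mine). The parent faults
every page of a shared anonymous mapping in, and the child still takes
roughly one minor fault per page on first access, because PTEs are
per-process:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/wait.h>

#define NPAGES 10000

int
main(void)
{
	size_t		pagesz = (size_t) sysconf(_SC_PAGESIZE);
	int			pipefd[2];
	char	   *mem;
	char		c = 0;

	if (pipe(pipefd) != 0)
		exit(1);

	/* shared anonymous memory, standing in for shared buffers */
	mem = mmap(NULL, pagesz * NPAGES, PROT_READ | PROT_WRITE,
			   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED)
		exit(1);

	if (fork() == 0)
	{
		struct rusage before, after;

		/* child: wait until the parent has touched every page */
		read(pipefd[0], &c, 1);

		getrusage(RUSAGE_SELF, &before);
		for (size_t i = 0; i < NPAGES; i++)
			c += ((volatile char *) mem)[i * pagesz];	/* first touch here */
		getrusage(RUSAGE_SELF, &after);

		printf("child minor faults: %ld for %d pages\n",
			   after.ru_minflt - before.ru_minflt, NPAGES);
		_exit(0);
	}

	/* parent: fault all pages in, creating parent-local PTEs only */
	for (size_t i = 0; i < NPAGES; i++)
		mem[i * pagesz] = 1;

	write(pipefd[1], &c, 1);	/* let the child run */
	wait(NULL);
	return 0;
}

On Linux this should report close to NPAGES minor faults in the child,
even though every physical page was already resident when it ran.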
> > >
> > > Okay, I got your point. BTW, I noticed that even for the case where
> > > all the data is in shared_buffers, the performance improvement
> > > decreases marginally with more than two workers. Am I reading the
> > > data correctly? If so, what is the theory, and do we have
> > > recommendations for a parallel degree?
> >
> > Is the decrease you're referring to in the total vacuum execution time?
> >
>
> Right.
>
> > When it comes to the execution time of phase 1, it seems we have good
> > scalability. For example, with 2 workers (i.e., 3 processes working in
> > total, including the leader) it got about a 3x speedup, and with 4
> > workers about a 5x speedup. As for the other phases, phase 3 got
> > slower, probably because of the PTE issue, but I haven't investigated
> > why phase 2 also got slightly slower with more than 2 workers.
> >
>
> Could it be that phase 2 now needs to access the shared area for
> TIDs, and some locking/unlocking causes such a slowdown?

No, the TidStore is shared in this case, but we don't take a lock on it
during phase 2.
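
For reference, the phase-2 access is a plain read-only probe of the
shared TidStore, roughly like the sketch below (simplified by me, not
the exact vacuumlazy.c code; TidStoreIsMember() is the real API as of
v17, but the callback shape is approximate):

#include "postgres.h"
#include "access/tidstore.h"

/*
 * Per-TID check run by index bulk-delete during phase 2 (sketch).
 * The shared TidStore is only read here, and nothing inserts into it
 * while the indexes are being vacuumed, so no TidStoreLockShare()
 * call is needed on this path.
 */
static bool
tid_reaped_sketch(TidStore *dead_items, ItemPointer itemptr)
{
	return TidStoreIsMember(dead_items, itemptr);
}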

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com


