Re: maintenance_work_mem = 64kB doesn't work for vacuum - Mailing list pgsql-hackers

From Masahiko Sawada
Subject Re: maintenance_work_mem = 64kB doesn't work for vacuum
Date
Msg-id CAD21AoCvn2CVJfhB=H_+Z7gVeQ733mRk1BOrQfwSiTiU+mQtFA@mail.gmail.com
In response to Re: maintenance_work_mem = 64kB doesn't work for vacuum  (David Rowley <dgrowleyml@gmail.com>)
Responses Re: maintenance_work_mem = 64kB doesn't work for vacuum
List pgsql-hackers
On Sun, Mar 9, 2025 at 7:03 PM David Rowley <dgrowleyml@gmail.com> wrote:
>
> On Mon, 10 Mar 2025 at 10:30, David Rowley <dgrowleyml@gmail.com> wrote:
> > Could you do something similar to what's in hash_agg_check_limits()
> > where we check we've got at least 1 item before bailing before we've
> > used up all the prescribed memory?  That seems like a safer coding
> > practice, as if in the future the minimum usage for a DSM segment goes
> > above 256KB, the bug comes back again.
>
> FWIW, I had something like the attached in mind.
>

Thank you for the patch! I like your idea. This means that even if we
set maintenance_work_mem to 64kB, the memory usage would not actually
be limited to 64kB, but that's probably fine since such a low setting
is primarily for testing purposes.

Regarding that patch, we need to note that lpdead_items is a counter
that is never reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one LP_DEAD
item, we perform a cycle of index vacuuming and heap vacuuming for
every subsequent block, even for blocks that have no LP_DEAD items. I
think we should use vacrel->dead_items_info->num_items instead.
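
For clarity, the check I have in mind would look roughly like this (a
sketch against lazy_scan_heap() in vacuumlazy.c; the exact hunk in
your patch may differ):

    /*
     * Trigger a round of index and heap vacuuming only once the memory
     * limit is exceeded AND the current dead-items store has at least
     * one entry.  num_items is reset after each vacuuming cycle,
     * whereas lpdead_items accumulates over the whole vacuum.
     */
    if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes &&
        vacrel->dead_items_info->num_items > 0)
    {
        /* Perform a round of index and heap vacuuming */
        lazy_vacuum(vacrel);
    }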

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
