Re: New IndexAM API controlling index vacuum strategies

From: Zhihong Yu
Subject: Re: New IndexAM API controlling index vacuum strategies
Msg-id: CALNJ-vT_8SVeodOngG3mxLik4fS7uD6Y+zsHx-6nuAL8eM4bDw@mail.gmail.com
In response to: Re: New IndexAM API controlling index vacuum strategies (Masahiko Sawada <sawada.mshk@gmail.com>)
List: pgsql-hackers

Hi, Masahiko-san:

For v2-0001-Introduce-IndexAM-API-for-choosing-index-vacuum-s.patch:

For blvacuumstrategy():

+   if (params->index_cleanup == VACOPT_TERNARY_DISABLED)
+       return INDEX_VACUUM_STRATEGY_NONE;
+   else
+       return INDEX_VACUUM_STRATEGY_BULKDELETE;

The 'else' can be omitted.
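
That is, with the early return the remaining 'else' is redundant (same
identifiers as in the patch):

    if (params->index_cleanup == VACOPT_TERNARY_DISABLED)
        return INDEX_VACUUM_STRATEGY_NONE;
    return INDEX_VACUUM_STRATEGY_BULKDELETE;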

Similar comment for ginvacuumstrategy().

For v2-0002-Choose-index-vacuum-strategy-based-on-amvacuumstr.patch:

If index_cleanup option is specified neither VACUUM command nor
storage option

I think this is what you meant (rephrased in the active voice):

If neither the VACUUM command nor the storage option specifies
the index_cleanup option,

- * integer, but you can't fit that many items on a page. 11 ought to be more
+ * integer, but you can't fit that many items on a page. 13 ought to be more

It would be nice to add a note explaining why the number of bits is increased.
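
For what it's worth, with the default 8 kB block size a page filled with
nothing but line pointers holds at most (8192 - 24) / 4 = 2042 items
(24-byte page header excluded, 4 bytes per ItemIdData), which still fits
in 11 bits; my guess is that 13 bits leaves headroom for larger block
sizes (a 32 kB page can hold up to 8186 bare line pointers). A comment
spelling that out would help.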

For choose_vacuum_strategy():

+       IndexVacuumStrategy ivstrat;

The variable is only used inside the loop. You can use vacrelstats->ivstrategies[i] directly and omit the variable.
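
For example (only a sketch; the loop shape and the counter are mine, not
taken from the patch):

    for (i = 0; i < nindexes; i++)
    {
        /* read the stored strategy directly instead of keeping a local copy */
        if (vacrelstats->ivstrategies[i] == INDEX_VACUUM_STRATEGY_NONE)
            nindexes_skipped++;
    }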

+       int ndeaditems_limit = (int) ((freespace / sizeof(ItemIdData)) * 0.7);

How was the factor of 0.7 determined? The comment below mentions a 'safety factor' but not how it was chosen.
I also wonder whether this factor should be exposed as a GUC.
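
For concreteness: assuming an 8 kB page and heap fillfactor 90, freespace
would be about 8192 * 0.10 = 819 bytes, and with sizeof(ItemIdData) = 4
the limit comes out to (819 / 4) * 0.7 ≈ 142 LP_DEAD items per page.
(My arithmetic under those assumptions, not numbers from the patch.)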

+   if (nworkers > 0)
+       nworkers--;

Should a log message / assertion be added for the case where nworkers is <= 0?
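
For instance, something like this (a sketch; I don't know whether
nworkers can legitimately be 0 here):

    Assert(nworkers >= 0);
    if (nworkers > 0)
        nworkers--;
    else
        elog(DEBUG1, "nworkers is already 0");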

+ * XXX: allowing to fill the heap page with only line pointer seems a overkill.

'a overkill' -> 'overkill' (no article needed)

Cheers

On Sun, Jan 17, 2021 at 10:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
On Mon, Jan 18, 2021 at 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Tue, Jan 5, 2021 at 10:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Tue, Dec 29, 2020 at 3:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > >
> > > On Tue, Dec 29, 2020 at 7:06 AM Peter Geoghegan <pg@bowt.ie> wrote:
> > > >
> > > > On Sun, Dec 27, 2020 at 11:41 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > > > > I experimented with this today, and I think that it is a good way to
> > > > > do it. I like the idea of choose_vacuum_strategy() understanding that
> > > > > heap pages that are subject to many non-HOT updates have a "natural
> > > > > extra capacity for LP_DEAD items" that it must care about directly (at
> > > > > least with non-default heap fill factor settings). My early testing
> > > > > shows that it will often take a surprisingly long time for the most
> > > > > heavily updated heap page to have more than about 100 LP_DEAD items.
> > > >
> > > > Attached is a rough patch showing what I did here. It was applied on
> > > > top of my bottom-up index deletion patch series and your
> > > > poc_vacuumstrategy.patch patch. This patch was written as a quick and
> > > > dirty way of simulating what I thought would work best for bottom-up
> > > > index deletion for one specific benchmark/test, which was
> > > > non-hot-update heavy. This consists of a variant pgbench with several
> > > > indexes on pgbench_accounts (almost the same as most other bottom-up
> > > > deletion benchmarks I've been running). Only one index is "logically
> > > > modified" by the updates, but of course we still physically modify all
> > > > indexes on every update. I set fill factor to 90 for this benchmark,
> > > > which is an important factor for how your VACUUM patch works during
> > > > the benchmark.
> > > >
> > > > This rough supplementary patch includes VACUUM logic that assumes (but
> > > > doesn't check) that the table has heap fill factor set to 90 -- see my
> > > > changes to choose_vacuum_strategy(). This benchmark is really about
> > > > stability over time more than performance (though performance is also
> > > > improved significantly). I wanted to keep both the table/heap and the
> > > > logically unmodified indexes (i.e. 3 out of 4 indexes on
> > > > pgbench_accounts) exactly the same size *forever*.
> > > >
> > > > Does this make sense?
> > >
> > > Thank you for sharing the patch. That makes sense.
> > >
> > > +        if (!vacuum_heap)
> > > +        {
> > > +            if (maxdeadpage > 130 ||
> > > +                /* Also check if maintenance_work_mem space is running out */
> > > +                vacrelstats->dead_tuples->num_tuples >
> > > +                vacrelstats->dead_tuples->max_tuples / 2)
> > > +                vacuum_heap = true;
> > > +        }
> > >
> > > The second test, checking if maintenance_work_mem space is running out,
> > > also makes sense to me. Perhaps another idea would be to compare the
> > > number of collected garbage tuples to the total number of heap tuples
> > > so that we do lazy_vacuum_heap() only when we’re likely to reclaim a
> > > certain amount of garbage in the table.
> > >
> > > >
> > > > Anyway, with a 15k TPS limit on a pgbench scale 3000 DB, I see that
> > > > pg_stat_database shows an almost ~28% reduction in blks_read after an
> > > > overnight run for the patch series (it was 508,820,699 for the
> > > > patches, 705,282,975 for the master branch). I think that the VACUUM
> > > > component is responsible for some of that reduction. There were 11
> > > > VACUUMs for the patch, 7 of which did not call lazy_vacuum_heap()
> > > > (these 7 VACUUM operations all only had a btbulkdelete() call for the
> > > > one problematic index on the table, named "abalance_ruin", which my
> > > > supplementary patch has hard-coded knowledge of).
> > >
> > > That's a very good result in terms of skipping lazy_vacuum_heap(). How
> > > much did the table and indexes bloat? Also, I'm curious about which
> > > test in choose_vacuum_strategy() turned vacuum_heap on: the '130' test or
> > > the check whether maintenance_work_mem space is running out? And what was the
> > > impact on clearing all-visible bits?
> > >
> >
> > I merged these patches and polished them.
> >
> > In the 0002 patch, we calculate how many LP_DEAD items can be
> > accumulated in the space left on a single heap page by fillfactor. I
> > increased MaxHeapTuplesPerPage so that we can accumulate more LP_DEAD
> > items on a heap page; otherwise, accumulating LP_DEAD items
> > unnecessarily constrains the number of heap tuples on a single page,
> > especially with small tuples, as I mentioned before. Previously, we
> > constrained the number of line pointers to avoid excessive
> > line-pointer bloat and to avoid requiring an increase in the size of
> > the work array. However, once the amvacuumstrategy stuff entered the
> > picture, accumulating line pointers has value. Also, we might want to
> > store the value returned by amvacuumstrategy so that the index AM can
> > refer to it during index deletion.
> >
> > The 0003 patch has btree indexes skip bulk-deletion if the index
> > hasn't grown since the last bulk-deletion. I stored the number of blocks
> > in the meta page but didn't implement meta page upgrading.
> >
>
> After more thought, I think that ambulkdelete needs to be able to
> refer to the answer from amvacuumstrategy. That way, the index can skip
> bulk-deletion when lazy vacuum doesn't vacuum the heap and the index
> doesn't want bulk-deletion either.
>
> I’ve attached an updated version of the patch set that includes the following changes:
>
> * Store the answers from amvacuumstrategy in either local memory
> or DSM (in the parallel vacuum case) so that ambulkdelete can refer
> to them.
> * Fix regression failures.
> * Update the documentation and comments.
>
> Note that the 0003 patch is still PoC quality, lacking the btree meta page
> version upgrade.

Sorry, I missed the 0002 patch. I've attached the patch set again.

Regards,

--
Masahiko Sawada
EDB:  https://www.enterprisedb.com/
