Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation
Date:
Msg-id: 20230119003718.b4lrrqonqyop2um4@awork3.anarazel.de
In response to: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation (Peter Geoghegan <pg@bowt.ie>)
Responses: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
Hi,

On 2023-01-18 13:45:19 -0800, Peter Geoghegan wrote:
> On Wed, Jan 18, 2023 at 1:08 PM Andres Freund <andres@anarazel.de> wrote:
> > I suggested nearby to only have ANALYZE update dead_tuples if there's
> > been no [auto]vacuum since the stats entry was created. That allows
> > recovering from stats resets, be it via crashes or explicitly. What do
> > you think?
>
> I like that idea. It's far simpler than the kind of stuff I was
> thinking about, and probably just as effective. Even if it introduces
> some unforeseen problem (which seems unlikely), we can still rely on
> pgstat_report_vacuum() to set things straight before too long.
>
> Are you planning on writing a patch for this? I'd be very interested
> in seeing this through. Could definitely review it.

I can, it should be just about trivial code-wise. A bit queasy about trying to
foresee the potential consequences.
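
It'd boil down to guarding the assignment in pgstat_report_analyze() with
something like the following (untested sketch, just to make it concrete;
field names as in PgStat_StatTabEntry):

  if (tabentry->vacuum_count == 0 && tabentry->autovac_vacuum_count == 0)
  {
      /*
       * No [auto]vacuum since the stats entry was created, e.g. after a
       * crash or an explicit stats reset - trust ANALYZE's estimate.
       */
      tabentry->dead_tuples = deadtuples;
  }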


A somewhat related issue is that pgstat_report_vacuum() sets dead_tuples to
what VACUUM itself observed, ignoring any concurrently reported dead
tuples. As far as I can tell, when vacuum takes a long time, that can lead to
severely under-accounting dead tuples.
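
To spell it out, pgstat_report_vacuum() simply overwrites the counters with
what VACUUM itself saw (abridged; exact field names vary a bit across
branches):

  /* abridged from pgstat_report_vacuum() */
  tabentry->live_tuples = livetuples;
  tabentry->dead_tuples = deadtuples;

Anything reported by concurrent DML while VACUUM was running gets clobbered
at this point.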

We probably lose track of a bit more than 50% of the dead tuples reported
since vacuum started: during the heap scan phase we miss tuples reported for
pages behind the current scan point, and during the index/heap vacuuming
phases we miss all of them. Assuming reports arrive at a roughly uniform
rate, that's about half of what's reported during the heap scan, plus
everything reported after it.

The only saving grace is that it'll be corrected at the next VACUUM. But
since autovacuum scheduling is driven by the dead-tuples count, that next
vacuum might very well be delayed noticeably because of this.


This is similar to the issue around ins_since_vacuum that Peter pointed out.


I wonder if we ought to remember the dead_tuples value at the start of the
heap scan and use that to adjust the final dead_tuples value. I'd lean towards
over-counting rather than under-counting and thus probably would go for
something like

  tabentry->dead_tuples = deadtuples + Max(0, tabentry->dead_tuples - deadtuples_at_start);

i.e. assuming we might have missed all concurrently reported dead tuples.
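
Plumbing-wise, VACUUM would have to snapshot the counter before the heap scan
starts and hand it over at the end, roughly like this (sketch only; the extra
pgstat_report_vacuum() argument is hypothetical):

  /* before the heap scan starts: snapshot the current counter */
  PgStat_StatTabEntry *tabentry =
      pgstat_fetch_stat_tabentry(RelationGetRelid(rel));
  PgStat_Counter deadtuples_at_start =
      tabentry ? tabentry->dead_tuples : 0;

  /* ... heap scan, index vacuuming, heap vacuuming ... */

  /* hypothetical fifth argument carrying the snapshot */
  pgstat_report_vacuum(RelationGetRelid(rel), rel->rd_rel->relisshared,
                       livetuples, deadtuples, deadtuples_at_start);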

Of course we could instead move to something like ins_since_vacuum and reset
it at the *start* of the vacuum. But that'd make the error case harder to
handle, without giving us more accuracy, I think?
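
I.e. something like (sketch):

  /*
   * At the start of VACUUM: reset, concurrent DML reports then
   * accumulate on top of zero.
   */
  tabentry->dead_tuples = 0;

If VACUUM errors out or the backend crashes after that point, the counter has
already been zeroed, so the dead tuples it failed to clean up stay invisible
until they're reported again.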


I do think this is an argument for splitting dead_tuples up into separate
"components" that we track differently, i.e. the number of dead items, the
number of not-yet-removable rows, and the number of dead tuples reported from
DML statements via pgstats.
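
In pgstats terms that could eventually look something like this (field names
are placeholders, not a concrete proposal):

  /* hypothetical split of PgStat_StatTabEntry.dead_tuples */
  PgStat_Counter dead_items;           /* LP_DEAD item pointers left behind */
  PgStat_Counter recently_dead_tuples; /* dead, but not yet removable */
  PgStat_Counter dml_dead_tuples;      /* reported by DML via pgstats */

That way VACUUM and DML reports would each only update the counters they
actually know something about.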

Greetings,

Andres Freund


