Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation
Date:
Msg-id: 20230119205657.5tiv4t75575ovt57@awork3.anarazel.de
In response to: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Decoupling antiwraparound autovacuum from special rules around auto cancellation
List: pgsql-hackers
Hi,

On 2023-01-19 15:12:12 -0500, Robert Haas wrote:
> On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > pgstat_report_analyze() will totally override the
> > tabentry->dead_tuples information that drives autovacuum.c, based on
> > an estimate derived from a random sample -- which seems to me to be an
> > approach that just doesn't have any sound theoretical basis.
> 
> In other words, ANALYZE sometimes (but not always) produces wrong answers.

For dead tuples, but not live tuples.
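
Concretely, the dead-tuple number ANALYZE reports is just the count seen in
its random block sample scaled up to the whole table, roughly along these
lines (a simplified sketch, not the actual acquire_sample_rows() code; the
function name is made up):

    #include <math.h>

    /*
     * Simplified sketch: scale the dead tuples counted in ANALYZE's block
     * sample up to the whole table.  The implicit assumption is that the
     * sampled blocks are representative, which is exactly the shaky part
     * when dead tuples are concentrated in one region of the table.
     */
    static double
    extrapolate_dead_tuples(double dead_in_sample, double blocks_sampled,
                            double total_blocks)
    {
        if (blocks_sampled <= 0)
            return 0;
        return floor((dead_in_sample / blocks_sampled) * total_blocks + 0.5);
    }

That one extrapolated number then replaces whatever dead_tuples value was
there before, however it was derived.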


> On Wed, Jan 18, 2023 at 4:08 PM Andres Freund <andres@anarazel.de> wrote:
> > One complicating factor is that VACUUM sometimes computes an incrementally
> > more bogus n_live_tup when it skips pages due to the VM, whereas ANALYZE
> > computes something sane. I unintentionally encountered one when I was trying
> > something while writing this email, reproducer attached.
> 
> In other words, VACUUM sometimes (but not always) produces wrong answers.

For live tuples, but not badly so for dead tuples.


> TL;DR: We're screwed.

We are, but perhaps not too badly so, because we can choose to believe analyze
more for live tuples, and vacuum more for dead tuples: analyze doesn't compute
reltuples incrementally, and vacuum doesn't compute dead_tuples incrementally.



> I refuse to believe that any amount of math you can do on numbers that
> can be arbitrarily inaccurate will result in an accurate answer
> popping out the other end. Trying to update the reltuples estimate
> incrementally based on an estimate derived from a non-random,
> likely-to-be-skewed subset of the table is always going to produce
> distortion that gets worse and worse the more times you do it. If we
> could say, well, the existing estimate of let's say 100 tuples per
> page is based on the density being 200 tuples per page in the pages I
> just scanned and 50 tuples per page in the rest of the table, then you
> could calculate a new estimate that keeps the value of 50 tuples per
> page for the remainder of the table intact and just replaces the
> estimate for the part you just scanned. But we have no way of doing
> that, so we just make some linear combination of the old estimate with
> the new one. That overweights the repeatedly-sampled portions of the
> table more and more, making the estimate wronger and wronger.
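
The update being described is essentially a linear interpolation between the
stored density and the density seen on the just-scanned pages. A simplified
sketch of that scheme (illustrative only, not the exact
vac_estimate_reltuples() code; the scanned fraction is one plausible choice
of weight):

    #include <math.h>

    /*
     * Simplified sketch of the incremental scheme described above: blend
     * the stored density with the density observed on the pages just
     * scanned, weighted by how much of the table was scanned.  Assumes
     * old_relpages, scanned_pages and total_pages are all > 0.
     */
    static double
    incremental_reltuples(double old_reltuples, double old_relpages,
                          double scanned_tuples, double scanned_pages,
                          double total_pages)
    {
        double      old_density = old_reltuples / old_relpages;
        double      new_density = scanned_tuples / scanned_pages;
        double      scanned_frac = scanned_pages / total_pages;
        double      updated_density;

        updated_density = old_density + (new_density - old_density) * scanned_frac;

        return floor(updated_density * total_pages + 0.5);
    }

If the VM keeps steering vacuum to the same non-representative slice of the
table, new_density is biased the same way on every run, and each run folds a
bit more of that bias into the stored estimate.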

Perhaps we should, at least occasionally, make vacuum do a cheaper version of
analyze's sampling to compute an updated reltuples. This could even happen
during the heap scan phase.
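
One possible shape of that, purely as a sketch (nothing like this exists
today; the names and the sampling rate are made up):

    #include <math.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /*
     * Hypothetical sketch: while walking the heap, occasionally read a page
     * the visibility map would let us skip, just to count its live tuples,
     * so reltuples can be recomputed from a roughly random subset of pages
     * instead of being nudged by the skewed set of pages that still needed
     * vacuuming.
     */
    #define RELTUPLES_SAMPLE_RATE 200   /* read ~1 in 200 skippable pages */

    typedef struct
    {
        double      pages_counted;  /* pages whose tuples we counted */
        double      tuples_counted; /* live tuples found on those pages */
    } ReltuplesSample;

    static bool
    should_sample_skippable_page(void)
    {
        return (rand() % RELTUPLES_SAMPLE_RATE) == 0;
    }

    static double
    reltuples_from_sample(const ReltuplesSample *s, double total_pages)
    {
        if (s->pages_counted <= 0)
            return 0;
        return floor((s->tuples_counted / s->pages_counted) * total_pages + 0.5);
    }

The extra reads could be bounded to a small fraction of the skipped pages, so
the cost stays well below what a separate analyze pass would incur.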

I don't like relying on analyze to fix vacuum's bogus reltuples, because
there's nothing forcing an analyze run soon after vacuum [incrementally]
screwed it up. Vacuum can be forced to run a lot of times due to xid horizons
preventing cleanup, after which there isn't anything forcing analyze to run
again.

But in contrast to dead_tuples, where I think we can just stop analyze from
updating it unless we crashed recently, I do think we need to update reltuples
in vacuum. So computing an accurate value there seems like the least
unreasonable option I can see.
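
As a sketch of that split, with a made-up struct and flag purely for
illustration (the real counters live in pgstat's table entries):

    #include <stdbool.h>

    /* illustrative stand-in for the relevant pgstat counters */
    typedef struct
    {
        double      live_tuples;
        double      dead_tuples;
    } TabEntryCounts;

    static void
    analyze_update_counts(TabEntryCounts *tabentry,
                          double sampled_live, double sampled_dead,
                          bool stats_recently_reset)
    {
        /* analyze's live-tuple estimate is worth believing */
        tabentry->live_tuples = sampled_live;

        /*
         * Only let analyze's noisy dead-tuple extrapolation overwrite the
         * counter when the entry was recently (re)initialized, e.g. after a
         * crash, where any estimate beats a stale zero.
         */
        if (stats_recently_reset)
            tabentry->dead_tuples = sampled_dead;
    }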

Greetings,

Andres Freund


