Re: New GUC autovacuum_max_threshold ? - Mailing list pgsql-hackers

From Frédéric Yhuel
Subject Re: New GUC autovacuum_max_threshold ?
Date
Msg-id cc23c226-e0be-47a7-bf6f-bcedd097a239@dalibo.com
In response to Re: New GUC autovacuum_max_threshold ?  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: New GUC autovacuum_max_threshold ?
List pgsql-hackers

On 09/05/2024 at 16:58, Robert Haas wrote:
> As I see it, a lot of the lack of agreement up until now is people
> just not understanding the math. Since I think I've got the right idea
> about the math, I attribute this to other people being confused about
> what is going to happen and would tend to phrase it as: some people
> don't understand how catastrophically bad it will be if you set this
> value too low.

FWIW, I do agree with your math. I found your demonstration convincing.
500000 was just a finger-in-the-wind guess.

Using the formula I suggested earlier:

vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
                vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);

your table of 2.56 billion tuples would be vacuumed once it accumulates
more than about 10 million dead tuples (i.e., every 28 minutes in your example).
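
To make the numbers concrete, here is a minimal standalone sketch of that
capped formula, evaluated for the 2.56 billion tuple example. It assumes the
default values of vac_base_thresh (50) and vac_scale_factor (0.2); it is not
the actual autovacuum.c code, just an illustration:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  reltuples = 2.56e9;
    double  vac_base_thresh = 50;   /* autovacuum_vacuum_threshold */
    double  vac_scale_factor = 0.2; /* autovacuum_vacuum_scale_factor */

    /* current formula: grows linearly with table size */
    double  current = vac_base_thresh + vac_scale_factor * reltuples;

    /* proposed cap: grows with sqrt(reltuples) instead */
    double  capped = vac_base_thresh +
                     vac_scale_factor * sqrt(reltuples) * 1000;

    double  vacthresh = fmin(current, capped);

    /* prints ~512 million (current) vs ~10.1 million (capped) */
    printf("current: %.0f\ncapped:  %.0f\nvacthresh: %.0f\n",
           current, capped, vacthresh);
    return 0;
}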

If we want to stick with the simple formula, we should probably choose a 
very high default, maybe 100 million, as you suggested earlier.

However, it would be nice to have the visibility map updated more 
frequently than every 100 million dead tuples. I wonder if this could be 
decoupled from the vacuum process?


