Hi frederic.yhuel,
> Thank you. FWIW, I would prefer a sub-linear growth, so maybe something
> like this
> vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
> vac_base_thresh + vac_scale_factor * pow(reltuples, 0.7) * 100);
> This would give :
> * 386M (instead of 5.1 billion currently) for a 25.6 billion tuples table ;
> * 77M for a 2.56 billion tuples table (Robert's example) ;
> * 15M (instead of 51M currently) for a 256M tuples table ;
> * 3M (instead of 5M currently) for a 25.6M tuples table.
> The other advantage is that you don't need another GUC.
Agreed, we just need to change the calculation formula, but I would prefer one of the following formulas, because they produce a smoother curve:
vacthresh = (float4) fmin(vac_base_thresh + vac_scale_factor * reltuples,
                          vac_base_thresh + vac_scale_factor * log2(reltuples) * 10000);
or
vacthresh = (float4) fmin(vac_base_thresh + (vac_scale_factor * reltuples),
                          sqrt(1000.0 * reltuples));
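To compare the shapes, here is a quick standalone sketch (not PostgreSQL code; it simply assumes the default autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2) that prints the threshold each variant would give for the table sizes in your examples:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    /* assumed defaults for autovacuum_vacuum_threshold / _scale_factor */
    const double vac_base_thresh = 50;
    const double vac_scale_factor = 0.2;
    const double sizes[] = {25.6e6, 256e6, 2.56e9, 25.6e9};

    for (int i = 0; i < 4; i++)
    {
        double reltuples = sizes[i];
        /* current formula, plus the three proposed caps */
        double linear = vac_base_thresh + vac_scale_factor * reltuples;
        double cap_pow = vac_base_thresh + vac_scale_factor * pow(reltuples, 0.7) * 100;
        double cap_log2 = vac_base_thresh + vac_scale_factor * log2(reltuples) * 10000;
        double cap_sqrt = sqrt(1000.0 * reltuples);

        printf("%11.0f tuples: linear=%.0f  pow0.7=%.0f  log2=%.0f  sqrt=%.0f\n",
               reltuples, linear,
               fmin(linear, cap_pow),
               fmin(linear, cap_log2),
               fmin(linear, cap_sqrt));
    }
    return 0;
}

(Build with "cc sketch.c -lm".) With those defaults, at 25.6 billion tuples the pow(0.7) cap lands around 386 million, the sqrt cap around 5 million, and the log2 cap below 70 thousand, so the three proposals differ by orders of magnitude in how often very large tables would get vacuumed.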
On 8/7/24 23:39, Nathan Bossart wrote:
> I've attached a new patch to show roughly what I think this new GUC should
> look like. I'm hoping this sparks more discussion, if nothing else.
>
Thank you. FWIW, I would prefer a sub-linear growth, so maybe something
like this:
vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
vac_base_thresh + vac_scale_factor * pow(reltuples, 0.7) * 100);
This would give :
* 386M (instead of 5.1 billion currently) for a 25.6 billion tuples table ;
* 77M for a 2.56 billion tuples table (Robert's example) ;
* 15M (instead of 51M currently) for a 256M tuples table ;
* 3M (instead of 5M currently) for a 25.6M tuples table.
The other advantage is that you don't need another GUC.
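As a quick sanity check (assuming the default scale factor of 0.2), the 25.6 billion tuples case works out to roughly:
vacthresh ≈ 50 + 0.2 * pow(25.6e9, 0.7) * 100 ≈ 0.2 * 1.93e7 * 100 ≈ 386 million
versus 0.2 * 25.6e9 ≈ 5.1 billion with the current linear formula.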
> On Tue, Jun 18, 2024 at 12:36:42PM +0200, Frédéric Yhuel wrote:
>> By the way, I wonder if there were any off-list discussions after Robert's
>> conference at PGConf.dev (and I'm waiting for the video of the conf).
>
> I don't recall any discussions about this idea, but Robert did briefly
> mention it in his talk [0].
>
> [0] https://www.youtube.com/watch?v=RfTD-Twpvac
>
Very interesting, thanks!