Re: cost delay brainstorming - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: cost delay brainstorming
Msg-id: 20240618203238.3wqhu722ozyux3do@awork3.anarazel.de
In response to: Re: cost delay brainstorming (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: cost delay brainstorming
List: pgsql-hackers
Hi,

On 2024-06-18 13:50:46 -0500, Nathan Bossart wrote:
> Have we ruled out further adjustments to the cost parameters as a first
> step?

I'm not against that, but it doesn't address the issue that with the current
logic one set of values just isn't going to fit both a 60MB database that's
allowed to burst to 100 iops and a 60TB database that has multiple 1M iops NVMe
drives.


That said, the fact that vacuum_cost_page_hit is 1 and vacuum_cost_page_miss
is 2 just doesn't make much sense aesthetically. There's a far bigger
multiplier in actual costs than that...
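
To put rough numbers on that (just a back-of-the-envelope Python sketch,
assuming the stock defaults of vacuum_cost_limit = 200, vacuum_cost_page_dirty
= 20, 8kB blocks and the 2ms autovacuum_vacuum_cost_delay):

# Rough ceiling on vacuum I/O implied by the cost model: vacuum may spend up to
# vacuum_cost_limit cost units per vacuum_cost_delay sleep, ignoring the time
# the work itself takes.
cost_limit = 200          # vacuum_cost_limit (stock default)
delay_s = 0.002           # autovacuum_vacuum_cost_delay, 2ms
page_miss_cost = 2        # vacuum_cost_page_miss
page_dirty_cost = 20      # vacuum_cost_page_dirty
block_size = 8192

units_per_sec = cost_limit / delay_s                    # ~100k cost units/s
miss_reads_per_sec = units_per_sec / page_miss_cost     # ~50k reads/s
dirty_writes_per_sec = units_per_sec / page_dirty_cost  # ~5k writes/s

print(f"page misses:  {miss_reads_per_sec:,.0f}/s (~{miss_reads_per_sec * block_size / 1e6:.0f} MB/s)")
print(f"page dirties: {dirty_writes_per_sec:,.0f}/s (~{dirty_writes_per_sec * block_size / 1e6:.0f} MB/s)")

That ceiling is far above what the 100 iops budget of the small instance
allows, and far below what a box with multiple 1M iops drives could absorb.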



> If you are still recommending that folks raise it and never recommending
> that folks lower it, ISTM that our defaults might still not be in the right
> ballpark.  The autovacuum_vacuum_cost_delay adjustment you reference (commit
> cbccac3) is already 5 years old, so maybe it's worth another look.

Adjusting the cost delay much lower doesn't make much sense imo. It's already
only 2ms, on a variable with 1ms granularity.  We could increase the resolution,
but sleeping for much shorter periods often isn't that cheap (hardware timers
have to be set up each time, and because the intervals are so short they can't
be combined with other timers) and/or barely leaves time to switch to other
tasks.
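
Not how vacuum sleeps internally, just a quick Python sketch (purely
illustrative) of how much very short sleeps tend to overshoot on a typical
system:

import time

# Measure, for a few requested sleep lengths, how far the actual sleep
# overshoots the request.  The absolute overshoot tends to stay roughly
# constant, so the relative overhead grows as the requested interval shrinks.
def avg_overshoot_ms(requested_ms, iterations=200):
    total = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        total += (time.perf_counter() - start) * 1000.0 - requested_ms
    return total / iterations

for req in (2.0, 1.0, 0.5, 0.1):
    print(f"requested {req:.1f} ms, average overshoot ~{avg_overshoot_ms(req):.3f} ms")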


So we'd have to increase the cost limit.
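
To give a feel for the range the limit would have to cover at the 2ms floor
(again just a sketch, with a hypothetical helper, reusing the page-miss cost
of 2 from above):

# What vacuum_cost_limit would have to be to sustain a given page-miss rate
# while keeping vacuum_cost_delay at 2ms, i.e. the budget that must be
# spendable per delay interval.
def needed_cost_limit(target_misses_per_sec, delay_s=0.002, page_miss_cost=2):
    return target_misses_per_sec * page_miss_cost * delay_s

print(needed_cost_limit(100))        # 0.4: the 100 iops burstable instance
print(needed_cost_limit(1_000_000))  # 4000.0: the multi-NVMe box

The low end can't even be expressed with an integer limit without also raising
the delay, while the high end needs a limit far above the default of 200.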


Greetings,

Andres Freund


