Tom Lane wrote:
> Heikki Linnakangas <heikki@enterprisedb.com> writes:
>> Tom Lane wrote:
>>> buffers_to_clean = Max(buffers_used * 1.1,
>>>                        buffers_to_clean * 0.999);
>
>> That would be overly aggressive on a workload that's steady on average,
>> but consists of small bursts. Like this: 0 0 0 0 100 0 0 0 0 100 0 0 0 0
>> 100. You'd end up writing ~100 pages on every bgwriter round, but you
>> only need an average of 20 pages per round.
>
> No, you wouldn't be *writing* that many, you'd only be keeping that many
> *clean*; which only costs more work if any of them get re-dirtied
> between writing and use. Which is a fairly small probability if we're
> talking about a small difference in the number of buffers to keep clean.
> So I think the average number of writes is hardly different, it's just
> that the backends are far less likely to have to do any of them.
Ah, ok, I misunderstood what you were proposing. Yes, that seems like a
good algorithm then.
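
For illustration only (this is a standalone sketch, not PostgreSQL source), here's roughly how that update rule behaves on the bursty workload I described above. The variable names just mirror the pseudocode; the workload array and the printing are made up for the example:

/*
 * Sketch of the moving-maximum heuristic: keep buffers_to_clean near the
 * recent peak of per-round buffer demand, decaying it slowly so that a
 * burst raises the target quickly but doesn't inflate it forever.
 */
#include <stdio.h>

#define Max(a, b) ((a) > (b) ? (a) : (b))

int
main(void)
{
    double buffers_to_clean = 0.0;
    int    workload[] = {0, 0, 0, 0, 100, 0, 0, 0, 0, 100, 0, 0, 0, 0, 100};
    int    nrounds = sizeof(workload) / sizeof(workload[0]);

    for (int i = 0; i < nrounds; i++)
    {
        int buffers_used = workload[i];

        /* the proposed update rule: track the peak, decay it slowly */
        buffers_to_clean = Max(buffers_used * 1.1,
                               buffers_to_clean * 0.999);

        printf("round %2d: used=%3d  target kept clean=%.1f\n",
               i, buffers_used, buffers_to_clean);
    }
    return 0;
}

After the first burst the target stays near 110 clean buffers, but as Tom points out, keeping them clean only turns into extra writes for the ones that actually get re-dirtied before they're reused.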
--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com