> The two alternative algorithms are similar, but have these
> differences:
> The former (option (2)) finds a constant number of dirty pages, though
> has varying search time.
This has the disadvantage of converging toward 0 dirty pages: a system
that has fewer than maxpages dirty buffers will write every dirty page on
every bgwriter run.
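
To make the difference concrete, here is a rough sketch of option (2) as I
read it; NBUFFERS, buffer_is_dirty() and write_buffer() are stand-ins for
the real buffer-manager interfaces, not the actual bgwriter code:

    #include <stdbool.h>

    #define NBUFFERS 1000                   /* stand-in for shared_buffers */

    extern bool buffer_is_dirty(int buf);   /* stand-in, not the real API */
    extern void write_buffer(int buf);      /* stand-in, not the real API */

    /*
     * Option (2): write at most maxpages dirty buffers per run, scanning as
     * far through the pool as needed to find them.  The write volume per run
     * is bounded, but the search time grows as the pool gets cleaner; once
     * fewer than maxpages buffers are dirty, every run scans the whole pool
     * and writes every dirty page it finds.
     */
    static void
    bgwriter_run_bounded_writes(int maxpages)
    {
        int written = 0;
        int buf;

        for (buf = 0; buf < NBUFFERS && written < maxpages; buf++)
        {
            if (buffer_is_dirty(buf))
            {
                write_buffer(buf);
                written++;
            }
        }
    }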
> The latter (option (3)) has constant search
> time, yet finds a varying number of dirty pages.
This might have the disadvantage of either leaving too much work for the
checkpoint or writing too many dirty pages in one run. Is writing a lot in
one run actually a problem, though? Or does the bgwriter pause
periodically while writing the pages of one run?
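
For comparison, option (3) as I understand it, again only a sketch using
the same stand-in interfaces:

    #include <stdbool.h>

    #define NBUFFERS 1000                   /* stand-in for shared_buffers */

    extern bool buffer_is_dirty(int buf);   /* stand-in, not the real API */
    extern void write_buffer(int buf);      /* stand-in, not the real API */

    /*
     * Option (3): inspect a fixed percentage of the pool per run and write
     * whatever dirty buffers turn up.  The search time is constant, but the
     * number of writes varies: possibly zero (leaving the work to the
     * checkpoint), possibly a large burst in a single run.
     */
    static void
    bgwriter_run_bounded_search(double percent)
    {
        int to_scan = (int) (NBUFFERS * percent / 100.0);
        int buf;

        for (buf = 0; buf < to_scan; buf++)
        {
            if (buffer_is_dirty(buf))
                write_buffer(buf);
        }
    }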
If this limit is expressed in pages, it would naturally need to be higher
than the current maxpages (to accommodate the clean pages that are also
scanned). The suggested 2% sounds far too low to me (that leaves 98% to
the checkpoint).
Also, I think we are checkpointing too frequently with the bgwriter in
place. Every 15-30 minutes should be sufficient, even for benchmarks. We
need a well-tuned bgwriter for that, though.
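
For illustration only (the GUC names and their semantics are exactly what
is being debated here, so treat this as a sketch of intent rather than a
recommendation), the kind of postgresql.conf settings I have in mind:

    # checkpoints only every 30 minutes; checkpoint_segments large enough
    # that WAL volume does not force a checkpoint before the timeout
    checkpoint_timeout = 1800       # seconds
    checkpoint_segments = 64

    # bgwriter tuned to run frequently and write modest amounts per run
    bgwriter_delay = 200            # milliseconds between runs
    bgwriter_percent = 10           # well above the suggested 2%
    bgwriter_maxpages = 100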
Andreas