Greg Stark <gsstark@mit.edu> writes:
> For any benchmarking to be meaningful you have to set the checkpoint interval
> to something more realistic. Something like 5 minutes. That way when the final
> checkpoint cycle isn't completely included in the timing data you'll at least
> be missing a statistically insignificant portion of the work.
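(For illustration only, not taken from this thread: the interval Greg
suggests could be approximated in postgresql.conf roughly as below.
checkpoint_timeout and checkpoint_segments are standard settings, but the
segment count shown is an arbitrary placeholder and would need to be sized
to the actual WAL volume and server version.)

    checkpoint_timeout  = 5min    # time between automatic checkpoints
    checkpoint_segments = 32      # enough WAL headroom that the timeout,
                                  # not segment exhaustion, triggers checkpoints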
This isn't about benchmarking --- or at least, I don't put any stock in
the average NOTPM values for the long-checkpoint-interval runs. What we
want to understand is why there's a checkpoint-triggered performance
dropoff that appears to last longer than the checkpoint itself. If
we can fix that, it should have a beneficial impact on real-world cases.
But we do not have to, and should not, restrict ourselves to real-world
test cases while trying to figure out what's going on.
regards, tom lane