Why do standby servers not simply treat every master checkpoint as a restartpoint? As I understand it, setting checkpoint_timeout and checkpoint_segments higher on a standby server than on the master effectively instructs the standby to skip restartpoints at some of the master's checkpoints. Even with identical settings on both servers, the standby could still skip a restartpoint near the checkpoint_timeout limit due to the vagaries of timekeeping (though I suppose that's very unlikely). But what would the advantage of skipping restartpoints be? Do people deliberately set hot standby machines up like this to trade a longer crash recovery time for lower write I/O?
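For concreteness, I mean a setup something like the following; the values are purely illustrative:

    # postgresql.conf on the standby
    checkpoint_timeout = 30min      # master left at the default 5min
    checkpoint_segments = 64        # master uses a smaller value
    # With settings like these the standby only performs a restartpoint
    # at a master checkpoint record once 30min have elapsed (or 64 WAL
    # segments have been replayed) since the previous restartpoint.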
When a hot standby server is initially set up from a rather old base backup plus an archive directory, it can be applying WAL at a very high rate, such that it replays master checkpoints many times per second (this happens when the master had long periods with little write activity, during which its checkpoints were driven purely by checkpoint_timeout). Actually performing a restartpoint that often would be painful. Presumably there would be few dirty buffers to write out, since each checkpoint covered little activity, but you would still have to make two passes over shared_buffers and fsync whichever files did happen to receive changes.
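If you want to see how often restartpoints are actually being performed during such a catch-up, one low-overhead way is to enable checkpoint logging on the standby; log_checkpoints covers restartpoints as well as checkpoints:

    # postgresql.conf on the standby
    log_checkpoints = on
    # The server log will then contain "restartpoint starting: ..." and
    # "restartpoint complete: wrote N buffers ..." lines, showing both
    # how frequently restartpoints occur and how few dirty buffers each
    # one ends up writing.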