Thread: AW: WAL & RC1 status
> Since there is not a separate WAL version stamp, introducing one now
> would certainly force an initdb.  I don't mind adding one if you think
> it's useful; another 4 bytes in pg_control won't hurt anything.  But
> it's not going to save anyone's bacon on this cycle.

Yes, if we have to initdb anyway, that would probably be a good idea.
Imho an initdb now is not a real issue, since all beta testers know that
for serious issues there might be an initdb after beta has started.

> At least one of my concerns (single point of failure) would require a
> change to the layout of pg_control, which would force initdb anyway.

Was that the "only one checkpoint back in time in pg_control" issue?

One issue with keeping more checkpoints in pg_control is that you then
need to keep more logs, and in my pgbench tests the log space was a real
issue even for the one-checkpoint case.  I think a utility to recreate a
busted pg_control would add a lot more stability than one more
checkpoint in pg_control.

We should probably have additional criteria besides time that can
trigger a checkpoint, such as N logs filled since the last checkpoint.
I do not think reducing the checkpoint interval is a solution for
occasional bursts of heavy activity.

Andreas
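
[A minimal sketch of the "checkpoint after N logs filled" idea described
above, assuming a simple per-segment counter; the constant and function
names are illustrative only and are not PostgreSQL's actual code.]

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative limit; in a real server this would be configurable. */
    #define CHECKPOINT_SEGMENTS 3

    static uint32_t segs_since_checkpoint = 0;

    /* Stand-in for whatever actually kicks off a checkpoint. */
    static void
    request_checkpoint(void)
    {
        printf("checkpoint requested\n");
    }

    /*
     * Called each time a WAL segment fills up: once CHECKPOINT_SEGMENTS
     * segments have filled since the last checkpoint, force a new one,
     * regardless of how much wall-clock time has passed.
     */
    static void
    log_segment_filled(void)
    {
        if (++segs_since_checkpoint >= CHECKPOINT_SEGMENTS)
        {
            request_checkpoint();
            segs_since_checkpoint = 0;
        }
    }

    int
    main(void)
    {
        /* Simulate a burst of activity filling several segments. */
        for (int i = 0; i < 7; i++)
            log_segment_filled();
        return 0;
    }
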
Zeugswetter Andreas SB <ZeugswetterA@Wien.Spardat.at> writes:
>> At least one of my concerns (single point of failure) would require a
>> change to the layout of pg_control, which would force initdb anyway.

> Was that the "only one checkpoint back in time in pg_control" issue?

Yes.

> One issue with keeping more checkpoints in pg_control is that you then
> need to keep more logs, and in my pgbench tests the log space was a
> real issue even for the one-checkpoint case.  I think a utility to
> recreate a busted pg_control would add a lot more stability than one
> more checkpoint in pg_control.

Well, there is a big difference between 1 and 2 checkpoints stored in
pg_control.  I don't intend to go further than 2.  But I disagree about
a log-reset utility being more useful than an extra checkpoint.  The
utility would be for manual recovery after a disaster, and it wouldn't
offer 100% recovery: you couldn't be sure that the last few transactions
had been applied atomically, ie, all or none.  (Perhaps pg_log got
updated to show them committed, but not all of their tuple changes made
it to disk; how will you know?)  If you can back up to the prior
checkpoint and then roll forward, you *do* have a shot at guaranteeing a
consistent database state after loss of the primary checkpoint.

> We should probably have additional criteria besides time that can
> trigger a checkpoint, such as N logs filled since the last checkpoint.

Perhaps.  I don't have time to work on that now, but we can certainly
improve the strategy in future releases.

			regards, tom lane
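
[For illustration, a cut-down sketch of a pg_control layout holding two
checkpoint pointers, as discussed above; the struct and field names here
are assumptions for the example, not the actual ControlFileData
definition.]

    #include <stdint.h>

    /*
     * A WAL location: log file number plus byte offset, loosely modeled
     * on an XLogRecPtr of that era (names are illustrative).
     */
    typedef struct
    {
        uint32_t xlogid;    /* log file number */
        uint32_t xrecoff;   /* byte offset within that log file */
    } WalRecPtr;

    /*
     * Cut-down pg_control layout keeping the last two checkpoint
     * locations: if the most recent checkpoint record turns out to be
     * unreadable, recovery can fall back to the prior one and roll
     * forward through the WAL from there.
     */
    typedef struct
    {
        uint32_t  wal_version;      /* WAL/catalog version stamp */
        WalRecPtr checkPoint;       /* latest checkpoint record location */
        WalRecPtr prevCheckPoint;   /* prior checkpoint record location */
        /* ... timestamps, database state, locale settings, etc. ... */
    } ControlFileSketch;
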
> Zeugswetter Andreas SB <ZeugswetterA@Wien.Spardat.at> writes:
> >> At least one of my concerns (single point of failure) would require a
> >> change to the layout of pg_control, which would force initdb anyway.
>
> > Was that the "only one checkpoint back in time in pg_control" issue?
>
> Yes.

Is changing pg_control the thing that is going to require the initdb?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026