Re: last_archived_wal is not necessary the latest WAL file (was Re: pgsql: Add test case for an archive recovery corner case.) - Mailing list pgsql-hackers
From: Noah Misch
Subject: Re: last_archived_wal is not necessary the latest WAL file (was Re: pgsql: Add test case for an archive recovery corner case.)
Msg-id: 20220627070457.GA2176699@rfd.leadboat.com
In response to: Re: last_archived_wal is not necessary the latest WAL file (was Re: pgsql: Add test case for an archive recovery corner case.) (Heikki Linnakangas <hlinnaka@iki.fi>)
List: pgsql-hackers
On Wed, Feb 16, 2022 at 01:42:27AM +0200, Heikki Linnakangas wrote:
> On 15/02/2022 23:28, Tom Lane wrote:
> > Heikki Linnakangas <hlinnaka@iki.fi> writes:
> > > That was interesting: the order that WAL segments are archived when a
> > > standby is promoted is not fully deterministic.
> >
> > Oh, of course.
> >
> > > I find it a bit surprising that pg_stat_archiver.last_archived_wal is
> > > not necessarily the highest-numbered segment that was archived. I
> > > propose that we mention that in the docs, as in the attached patch.
> >
> > +1, but I think the description of that field in the pg-stat-archiver-view
> > table is also pretty misleading. Maybe like
> >
> > - Name of the last WAL file successfully archived
> > + Name of the WAL file most recently successfully archived
> >
> > and similarly s/last/most recent/ for the other fields claiming
> > to be "last" something.
>
> Makes sense, committed it that way.

This has seen two failures like the following:

  #   Failed test 'check table contents after point-in-time recovery'
  #   at t/028_pitr_timelines.pl line 167.
  #          got: '2'
  #     expected: '3'
  # Looks like you failed 1 test of 3.

 sysname  │      snapshot       │ branch │ bfurl
──────────┼─────────────────────┼────────┼──────────────────────────────────────────────────────────────────────────────────────────────
 sungazer │ 2022-03-08 16:24:46 │ HEAD   │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-03-08%2016%3A24%3A46
 mylodon  │ 2022-05-18 00:14:19 │ HEAD   │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-05-18%2000%3A14%3A19

For me, it reproduces consistently with a sleep just before the startup
process exits:

--- a/src/backend/postmaster/startup.c
+++ b/src/backend/postmaster/startup.c
@@ -266,6 +266,8 @@ StartupProcessMain(void)
 	 */
 	StartupXLOG();
 
+	pg_usleep(3 * 1000 * 1000);
+
 	/*
 	 * Exit normally. Exit code 0 tells postmaster that we completed recovery
 	 * successfully.
A normal/passing run gets this sequence of events:

  checkpointer: end-of-recovery checkpoint
  startup process: exits
  postmaster reaper(): "Startup succeeded, commence normal operations" = StartArchiver()
  test issues its INSERT
  shutdown checkpoint
  checkpointer: exits
  postmaster reaper(): signal_child(PgArchPID, SIGUSR2)
  archiver: completes archiving, exits

However, if the startup process exit is slow enough, you get:

  checkpointer: end-of-recovery checkpoint
  test issues its INSERT
  shutdown checkpoint
  checkpointer: exits
  postmaster reaper(): no PgArchPID, skip that part
  [no archiving]

One can adapt the test to the server behavior by having the test wait for
the archiver to start, as attached.  This is sufficient to make check-world
pass with the above sleep in place.  I think we should also modify the
PostgresNode archive_command to log a message.  That lack of logging was an
obstacle upthread (as seen in commit 3279cef) and again here.

An alternative would be to declare that the test is right and the server is
wrong.  The postmaster knows how to start the checkpointer if the
checkpointer is not running when the postmaster needs a shutdown
checkpoint.  It could start the archiver around that same area:

	/* Start the checkpointer if not running */
	if (CheckpointerPID == 0)
		CheckpointerPID = StartCheckpointer();

	/* And tell it to shut down */
	if (CheckpointerPID != 0)
	{
		signal_child(CheckpointerPID, SIGUSR2);
		pmState = PM_SHUTDOWN;
	}

Any opinions between the change-test and change-server approaches?  The
test is new in HEAD, but I get the same behavior in v14 and v13.  In v12,
node_pitr2 never exits pg_is_in_recovery().  v11 would need test code
changes, which I did not attempt.

Thanks,
nm
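P.S. For illustration only, the test-side wait could look roughly like the
sketch below; this is not the actual attachment, the node name is
hypothetical, and it assumes a server version where the archiver shows up
in pg_stat_activity (v14+).  It uses PostgresNode's poll_query_until, which
retries the query until it returns true or a timeout expires:

```perl
# Hypothetical sketch: block until the archiver process is running, so the
# shutdown checkpoint cannot race past archiver startup.  $node_pitr is an
# illustrative node name, not the one in the committed test.
$node_pitr->poll_query_until('postgres',
	"SELECT count(*) > 0 FROM pg_stat_activity WHERE backend_type = 'archiver'")
  or die "timed out waiting for archiver to start";
```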
Attachment