Thread: Add basic tests for the low-level backup method.
Hackers,

As promised in [1], attached are some basic tests for the low-level backup method. There are currently no tests for the low-level backup method: pg_backup_start() and pg_backup_stop() are called but not exercised in any real fashion.

There is a lot more that can be done, but this at least supplies some basic tests and provides a place for future improvement.

Regards,
-David

[1] https://www.postgresql.org/message-id/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b%40pgmasters.net
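For context, a low-level backup test of this sort follows roughly the shape below (a minimal sketch using the standard PostgreSQL::Test::Cluster and PostgreSQL::Test::BackgroundPsql helpers; node names and queries are illustrative, not the contents of the attached patch):

use strict;
use warnings;
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More;

# Start a primary with WAL archiving so the backup can be recovered later.
my $node_primary = PostgreSQL::Test::Cluster->new('primary');
$node_primary->init(has_archiving => 1, allows_streaming => 1);
$node_primary->start;

# pg_backup_start() and pg_backup_stop() must run in the same session,
# so keep a background session open while the data directory is copied.
my $psql = $node_primary->background_psql('postgres');
$psql->query_safe(
	"SELECT pg_backup_start(label => 'low-level test', fast => true)");

# ... copy $node_primary->data_dir to a backup directory here ...

# pg_backup_stop() returns the backup_label contents, which must be
# written into the backup directory before it can be recovered.
my $backup_label =
  $psql->query_safe('SELECT labelfile FROM pg_backup_stop()');
$psql->quit;

done_testing();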
On 11/12/23 08:21, David Steele wrote:
>
> As promised in [1], attached are some basic tests for the low-level
> backup method.

Added to the 2024-03 CF.

Regards,
-David
On Thu, Feb 29, 2024 at 10:30:52AM +1300, David Steele wrote:
> On 11/12/23 08:21, David Steele wrote:
>> As promised in [1], attached are some basic tests for the low-level
>> backup method.
>
> Added to the 2024-03 CF.

There is already 040_standby_failover_slots_sync.pl in recovery/ that
uses the number of your test script. You may want to bump it; that's
a nit.

+unlink("$backup_dir/postmaster.pid")
+  or BAIL_OUT("unable to unlink $backup_dir/postmaster.pid");
+unlink("$backup_dir/postmaster.opts")
+  or BAIL_OUT("unable to unlink $backup_dir/postmaster.opts");
+unlink("$backup_dir/global/pg_control")
+  or BAIL_OUT("unable to unlink $backup_dir/global/pg_control");

RELCACHE_INIT_FILENAME as well?

+# Rather than writing out backup_label, try to recover the backup without
+# backup_label to demonstrate that recovery will not work correctly without it,
+# i.e. the canary table will be missing and the cluster will be corrupt. Provide
+# only the WAL segment that recovery will think it needs.

Okay, why not. No objections to this addition. I am a bit surprised
that this is not scanning some of the logs produced by the startup
process for particular patterns.

+# Save backup_label into the backup directory and recover using the primary's
+# archive. This time recovery will succeed and the canary table will be
+# present.

Here as well, I think that we should add some log_contains() with
pre-defined patterns to show that recovery has completed the way we
want it with a backup_label up to the end-of-backup record.
--
Michael
On 2/29/24 16:55, Michael Paquier wrote:
> On Thu, Feb 29, 2024 at 10:30:52AM +1300, David Steele wrote:
>> On 11/12/23 08:21, David Steele wrote:
>>> As promised in [1], attached are some basic tests for the low-level
>>> backup method.
>>
>> Added to the 2024-03 CF.
>
> There is already 040_standby_failover_slots_sync.pl in recovery/ that
> uses the number of your test script. You may want to bump it; that's
> a nit.

Renamed to 042_low_level_backup.pl.

> +unlink("$backup_dir/postmaster.pid")
> +  or BAIL_OUT("unable to unlink $backup_dir/postmaster.pid");
> +unlink("$backup_dir/postmaster.opts")
> +  or BAIL_OUT("unable to unlink $backup_dir/postmaster.opts");
> +unlink("$backup_dir/global/pg_control")
> +  or BAIL_OUT("unable to unlink $backup_dir/global/pg_control");
>
> RELCACHE_INIT_FILENAME as well?

I'm not trying to implement the full exclusion list here, just enough
to get the test working. Since exclusions are optional according to the
docs, I don't think we need them for a valid test.

> +# Rather than writing out backup_label, try to recover the backup without
> +# backup_label to demonstrate that recovery will not work correctly without it,
> +# i.e. the canary table will be missing and the cluster will be corrupt. Provide
> +# only the WAL segment that recovery will think it needs.
>
> Okay, why not. No objections to this addition. I am a bit surprised
> that this is not scanning some of the logs produced by the startup
> process for particular patterns.

Not sure what to look for here. There are no distinct messages for
crash recovery. Perhaps there should be?

> +# Save backup_label into the backup directory and recover using the primary's
> +# archive. This time recovery will succeed and the canary table will be
> +# present.
>
> Here as well, I think that we should add some log_contains() with
> pre-defined patterns to show that recovery has completed the way we
> want it with a backup_label up to the end-of-backup record.

Sure, I added a check for the new log message when recovering with a
backup_label.

Regards,
-David
On Wed, Mar 13, 2024 at 01:12:28PM +1300, David Steele wrote:
> On 2/29/24 16:55, Michael Paquier wrote:
>> +unlink("$backup_dir/postmaster.pid")
>> +  or BAIL_OUT("unable to unlink $backup_dir/postmaster.pid");
>> +unlink("$backup_dir/postmaster.opts")
>> +  or BAIL_OUT("unable to unlink $backup_dir/postmaster.opts");
>> +unlink("$backup_dir/global/pg_control")
>> +  or BAIL_OUT("unable to unlink $backup_dir/global/pg_control");
>>
>> RELCACHE_INIT_FILENAME as well?
>
> I'm not trying to implement the full exclusion list here, just enough
> to get the test working. Since exclusions are optional according to
> the docs, I don't think we need them for a valid test.

Okay. Fine by me in the end.

>> +# Rather than writing out backup_label, try to recover the backup without
>> +# backup_label to demonstrate that recovery will not work correctly without it,
>> +# i.e. the canary table will be missing and the cluster will be corrupt. Provide
>> +# only the WAL segment that recovery will think it needs.
>>
>> Okay, why not. No objections to this addition. I am a bit surprised
>> that this is not scanning some of the logs produced by the startup
>> process for particular patterns.
>
> Not sure what to look for here. There are no distinct messages for
> crash recovery. Perhaps there should be?

The closest thing I can think of here would be "database system was not
properly shut down; automatic recovery in progress", as we don't have
InArchiveRecovery, after checking that the canary is missing. If you
don't like this suggestion, feel free to say so, of course :)

>> +# Save backup_label into the backup directory and recover using the primary's
>> +# archive. This time recovery will succeed and the canary table will be
>> +# present.
>>
>> Here as well, I think that we should add some log_contains() with
>> pre-defined patterns to show that recovery has completed the way we
>> want it with a backup_label up to the end-of-backup record.
>
> Sure, I added a check for the new log message when recovering with a
> backup_label.

+ok($node_replica->log_contains('completed backup recovery with redo LSN'),
+  'verify backup recovery performed with backup_label');

Okay for this choice. I was thinking first about "starting backup
recovery with redo LSN", closer to the area where the backup_label is
read.
--
Michael
On 3/13/24 19:15, Michael Paquier wrote:
> On Wed, Mar 13, 2024 at 01:12:28PM +1300, David Steele wrote:
>>
>> Not sure what to look for here. There are no distinct messages for
>> crash recovery. Perhaps there should be?
>
> The closest thing I can think of here would be "database system was
> not properly shut down; automatic recovery in progress", as we don't
> have InArchiveRecovery, after checking that the canary is missing. If
> you don't like this suggestion, feel free to say so, of course :)

That works for me. I think I got it confused with "database system was
interrupted..." when I was looking at the success vs. fail logs.

>> Sure, I added a check for the new log message when recovering with a
>> backup_label.
>
> +ok($node_replica->log_contains('completed backup recovery with redo LSN'),
> +  'verify backup recovery performed with backup_label');
>
> Okay for this choice. I was thinking first about "starting backup
> recovery with redo LSN", closer to the area where the backup_label is
> read.

I think you are right that the start message is better since it can
only appear once, when the backup_label is found. The completed message
could in theory appear after a restart, though the backup_label must
have been found at some point.

Regards,
-David
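Concretely, the two checks discussed here would look something like this (a sketch using the $node_replica name from the quoted hunk, not the committed code):

# Recovery without backup_label: expect plain crash recovery, since the
# node cannot know it is starting from a backup.
ok( $node_replica->log_contains(
		'database system was not properly shut down; automatic recovery in progress'),
	'verify crash recovery instead of backup recovery');

# Recovery with backup_label: expect backup recovery to start from the
# redo LSN recorded in backup_label.
ok( $node_replica->log_contains('starting backup recovery with redo LSN'),
	'verify backup recovery performed with backup_label');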
On Thu, Mar 14, 2024 at 09:12:52AM +1300, David Steele wrote:
> I think you are right that the start message is better since it can
> only appear once, when the backup_label is found. The completed message
> could in theory appear after a restart, though the backup_label must
> have been found at some point.

So, I've given a try to this patch with 99b4a63bef94, to note that
sidewinder failed because of a timing issue on Windows: the recovery of
the node without backup_label, expected to fail, would try to back up
the last segment it has replayed because, as it has no backup_label, it
behaves like the primary. It would try to use the same archive location
as the primary, leading to a conflict failure on Windows. This one was
easy to fix, by overwriting postgresql.conf on the node to not do
archiving.

Following that, I've noticed a second race condition: we don't wait for
the segment after pg_switch_wal() to be archived. This one can be
easily avoided with a poll on pg_stat_archiver.

After that, also because I've initially managed to, cough, forget an
update of meson.build to list the new test, I've noticed a third
failure on Windows for the case of the node that has a backup_label.
Here is one of the failures:
https://cirrus-ci.com/task/5245341683941376

regress_log_042_low_level_backup and
042_low_level_backup_replica_success.log have all the information
needed, which can be summarized like this:
The system cannot find the file specified.
2024-03-14 06:02:37.670 GMT [560][startup] FATAL: invalid data in file "backup_label"

The first message is something new to me, and seems to point to a
corruption failure of the file system. Why don't we see something
similar in other tests, then? Leaving that aside..

The second LOG is something that can be acted on. I've added some
debugging to the parsing of the backup_label file in the backend, and
noticed that the first fscanf() for START WAL LOCATION is failing
because the last %c is detected as \r rather than \n. Tweaking the
contents stored from pg_backup_stop() with a sed won't help, because
the issue is that we write the CRLFs with append_to_file, and the
startup process cannot cope with that. The simplest method I can think
of is to use binmode, as of the attached.

It is the first time that we'd take the contents received from a
BackgroundPsql and write them to a file parsed by the backend, so
perhaps we should try to do that in a more general way, but I'm not
sure how, tbh, and the case of this test is special, while adding
handling for \r when reading the backup_label got discussed in the past
but we were OK with what we are doing now on HEAD.

On top of all that, note that I have removed remove_tree as I am not
sure if this would be OK on all the buildfarm animals, added a quit()
for BackgroundPsql, moved queries to use less BackgroundPsql, as well
as a few other things like avoiding the hardcoded segment names.
meson.build is.. Cough.. Updated now.

I am attaching an updated patch with all that fixed, which is stable in
the CI and any tests I've run. Do you have any comments about that?
--
Michael
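In test terms, the fixes described above look roughly like the following (a sketch assuming the standard PostgreSQL::Test::Cluster helpers; $node_primary, $node_replica, $backup_dir and $backup_label are illustrative variable names):

# Disable archiving on the node restored without backup_label: since it
# behaves like a primary, it must not write into the primary's archive.
$node_replica->append_conf('postgresql.conf', 'archive_mode = off');

# Wait for the segment switched by pg_switch_wal() to actually be
# archived, instead of assuming the archiver has already caught up.
my $segment_name = $node_primary->safe_psql('postgres',
	'SELECT pg_walfile_name(pg_switch_wal())');
$node_primary->poll_query_until('postgres',
	"SELECT last_archived_wal >= '$segment_name' FROM pg_stat_archiver")
  or die 'timed out waiting for archiving';

# Write backup_label in binary mode so that Perl on Windows does not
# translate \n into \r\n, which the backend's backup_label parser
# cannot cope with.
open my $fh, '>>', "$backup_dir/backup_label"
  or die "could not open backup_label: $!";
binmode $fh;
print $fh $backup_label;
close $fh;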
On 3/14/24 20:00, Michael Paquier wrote:
> On Thu, Mar 14, 2024 at 09:12:52AM +1300, David Steele wrote:
>> I think you are right that the start message is better since it can
>> only appear once, when the backup_label is found. The completed message
>> could in theory appear after a restart, though the backup_label must
>> have been found at some point.
>
> So, I've given a try to this patch with 99b4a63bef94, to note that
> sidewinder failed because of a timing issue on Windows: the recovery
> of the node without backup_label, expected to fail, would try to back
> up the last segment it has replayed because, as it has no backup_label,
> it behaves like the primary. It would try to use the same archive
> location as the primary, leading to a conflict failure on Windows.
> This one was easy to fix, by overwriting postgresql.conf on the node
> to not do archiving.

Hmmm, I wonder why this did not show up in the Windows tests on CI?

> Following that, I've noticed a second race condition: we don't wait
> for the segment after pg_switch_wal() to be archived. This one can be
> easily avoided with a poll on pg_stat_archiver.

Ugh, yeah, good change.

> After that, also because I've initially managed to, cough, forget an
> update of meson.build to list the new test, I've noticed a third
> failure on Windows for the case of the node that has a backup_label.
> Here is one of the failures:
> https://cirrus-ci.com/task/5245341683941376

Is the missing test in meson the reason we did not see test failures
for Windows in CI?

> regress_log_042_low_level_backup and
> 042_low_level_backup_replica_success.log have all the information
> needed, which can be summarized like this:
> The system cannot find the file specified.
> 2024-03-14 06:02:37.670 GMT [560][startup] FATAL: invalid data in file "backup_label"
>
> The first message is something new to me, and seems to point to a
> corruption failure of the file system. Why don't we see something
> similar in other tests, then? Leaving that aside..
>
> The second LOG is something that can be acted on. I've added some
> debugging to the parsing of the backup_label file in the backend, and
> noticed that the first fscanf() for START WAL LOCATION is failing
> because the last %c is detected as \r rather than \n. Tweaking the
> contents stored from pg_backup_stop() with a sed won't help, because
> the issue is that we write the CRLFs with append_to_file, and the
> startup process cannot cope with that. The simplest method I can
> think of is to use binmode, as of the attached.

Yeah, that makes sense.

> It is the first time that we'd take the contents received from a
> BackgroundPsql and write them to a file parsed by the backend, so
> perhaps we should try to do that in a more general way, but I'm not
> sure how, tbh, and the case of this test is special, while adding
> handling for \r when reading the backup_label got discussed in the
> past but we were OK with what we are doing now on HEAD.

I think it makes sense to leave the parsing code as is and make the
change in the test. If we add more tests to this module we'll probably
need a function to avoid repeating code.

> On top of all that, note that I have removed remove_tree as I am not
> sure if this would be OK on all the buildfarm animals, added a quit()
> for BackgroundPsql, moved queries to use less BackgroundPsql, as well
> as a few other things like avoiding the hardcoded segment names.
> meson.build is.. Cough.. Updated now.

OK.

> I am attaching an updated patch with all that fixed, which is stable
> in the CI and any tests I've run. Do you have any comments about
> that?

These changes look good to me. Sure wish we had an easier way to test
commits in the build farm.

Regards,
-David
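The helper David mentions could be as small as the sketch below; write_binary_file() is a hypothetical name for illustration, not something in the patch:

sub write_binary_file
{
	# Hypothetical helper: write contents captured from a BackgroundPsql
	# session to a file in binary mode, so that CRLF translation on
	# Windows does not corrupt files the backend will later parse.
	my ($path, $contents) = @_;

	open my $fh, '>', $path
	  or die "could not open $path: $!";
	binmode $fh;
	print $fh $contents;
	close $fh
	  or die "could not close $path: $!";
	return;
}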
On Fri, Mar 15, 2024 at 09:40:38AM +1300, David Steele wrote:
> Is the missing test in meson the reason we did not see test failures
> for Windows in CI?

The test has to be listed in src/test/recovery/meson.build or the CI
would ignore it.

>> The second LOG is something that can be acted on. I've added some
>> debugging to the parsing of the backup_label file in the backend, and
>> noticed that the first fscanf() for START WAL LOCATION is failing
>> because the last %c is detected as \r rather than \n. Tweaking the
>> contents stored from pg_backup_stop() with a sed won't help, because
>> the issue is that we write the CRLFs with append_to_file, and the
>> startup process cannot cope with that. The simplest method I can
>> think of is to use binmode, as of the attached.
>
> Yeah, that makes sense.

I am wondering if there is a better trick here that would not require
changes in the backend to make the backup_label parsing more flexible,
though.

>> I am attaching an updated patch with all that fixed, which is stable
>> in the CI and any tests I've run. Do you have any comments about
>> that?
>
> These changes look good to me. Sure wish we had an easier way to test
> commits in the build farm.

That's why these tests are not that easy: they can be racy. I've run
the test 5~10 times in the CI this time to gain more confidence, and
saw zero failures with the stability fixes in place, including Windows.
I've applied it now, as I can still monitor the buildfarm for a few
more days. Let's see what happens, but that should be better.
--
Michael
On 3/15/24 12:38, Michael Paquier wrote:
> On Fri, Mar 15, 2024 at 09:40:38AM +1300, David Steele wrote:
>> Is the missing test in meson the reason we did not see test failures
>> for Windows in CI?
>
> The test has to be listed in src/test/recovery/meson.build or the CI
> would ignore it.

Right -- I will keep this in mind for the future.

>>> The second LOG is something that can be acted on. I've added some
>>> debugging to the parsing of the backup_label file in the backend, and
>>> noticed that the first fscanf() for START WAL LOCATION is failing
>>> because the last %c is detected as \r rather than \n. Tweaking the
>>> contents stored from pg_backup_stop() with a sed won't help, because
>>> the issue is that we write the CRLFs with append_to_file, and the
>>> startup process cannot cope with that. The simplest method I can
>>> think of is to use binmode, as of the attached.
>>
>> Yeah, that makes sense.
>
> I am wondering if there is a better trick here that would not require
> changes in the backend to make the backup_label parsing more flexible,
> though.

Well, this is what we recommend in the docs, i.e. using bin mode to
save backup_label, so it seems OK to me.

>>> I am attaching an updated patch with all that fixed, which is stable
>>> in the CI and any tests I've run. Do you have any comments about
>>> that?
>>
>> These changes look good to me. Sure wish we had an easier way to test
>> commits in the build farm.
>
> That's why these tests are not that easy: they can be racy. I've run
> the test 5~10 times in the CI this time to gain more confidence, and
> saw zero failures with the stability fixes in place, including Windows.
> I've applied it now, as I can still monitor the buildfarm for a few
> more days. Let's see what happens, but that should be better.

At least sidewinder is happy now -- and the build farm in general, as
far as I can see. Thank you for your help on this!

-David
On Fri, Mar 15, 2024 at 08:38:47AM +0900, Michael Paquier wrote:
> That's why these tests are not that easy: they can be racy. I've run
> the test 5~10 times in the CI this time to gain more confidence, and
> saw zero failures with the stability fixes in place, including Windows.
> I've applied it now, as I can still monitor the buildfarm for a few
> more days. Let's see what happens, but that should be better.

So, it looks like the buildfarm is clear. sidewinder has reported a
green state, and the recent runs of the CFbot across all the patches
are looking stable as well on all platforms. There are still a few
buildfarm members on Windows that will take some more time before
running.
--
Michael
On Fri, Mar 15, 2024 at 06:23:15PM +1300, David Steele wrote:
> Well, this is what we recommend in the docs, i.e. using bin mode to
> save backup_label, so it seems OK to me.

Indeed, I didn't notice that this is actually documented, so what I did
took the right angle. French flair, perhaps..
--
Michael
On 3/15/24 18:32, Michael Paquier wrote:
> On Fri, Mar 15, 2024 at 06:23:15PM +1300, David Steele wrote:
>> Well, this is what we recommend in the docs, i.e. using bin mode to
>> save backup_label, so it seems OK to me.
>
> Indeed, I didn't notice that this is actually documented, so what I
> did took the right angle. French flair, perhaps..

This seems like a reasonable explanation to me.

-David
On Fri, Mar 15, 2024 at 06:37:35PM +1300, David Steele wrote:
> This seems like a reasonable explanation to me.

FYI, drongo has just passed the test. fairywren uses TAP but does not
run the recovery tests.
--
Michael