Thread: 039_end_of_wal: error in "xl_tot_len zero" test
Hello, hackers,

I believe there is a small problem in 039_end_of_wal.pl's "xl_tot_len
zero" test. It assumes that after an immediate shutdown of the server,
startup recovery should always produce a message matching "invalid record
length at .*: wanted 24, got 0". However, if the circumstances are just
right and we happen to hit exactly the edge of a WAL page, then the
message on startup recovery would instead be "invalid magic number 0000
in log segment .*, offset .*". The test does not take that into account.

Now, reproducing this is somewhat tricky, because the exact position in
WAL at test time depends on what exactly initdb did, and that not only
differs between major versions, but also depends on e.g. username length,
locales available, and perhaps more. Even though this problem was
originally found "in the wild" on one particular system on one particular
code branch, I've written a small helper patch to make reproduction on
master easier, see
0001-repro-for-039_end_of_wal-s-problem-with-page-end.patch.

This patch adds a single emit_message call of (hopefully) the right size
to make sure we hit the end of a WAL block right by the time we call
$node->stop('immediate') in the "xl_tot_len zero" test. With this patch,
the "xl_tot_len zero" test fails every time, because the server writes
"invalid magic number 0000 in log segment" while the test still only
expects "invalid record length at .*: wanted 24, got 0". Of course, this
0001 patch is *not* meant to be committed; it is only an
issue-reproduction helper.

I can think of two possible fixes:

1. Update advance_out_of_record_splitting_zone to also avoid stopping at
exactly the block end:

      my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
 -    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)
 +    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold ||
 +           $page_offset <= $SizeOfXLogShortPHD)
      {

see 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch

We need to compare with $SizeOfXLogShortPHD (and not with zero) because
at that point, even though we haven't actually written out the new WAL
page yet, its header is already in place in memory and taken into account
for LSN reporting.

2. Alternatively, amend the "xl_tot_len zero" test to expect the "invalid
magic number 0000 in WAL segment" message as well:

      $node->start;
      ok( $node->log_contains(
 +            "invalid magic number 0000 in WAL segment|" .
              "invalid record length at .*: expected at least 24, got 0",
              $log_size
          ),

see 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch

I think it makes sense to backport whatever the final change ends up
being to all branches that have 039_end_of_wal (REL_12+).

Any thoughts?

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
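(For readers without the tree at hand, here is roughly how the loop inside
advance_out_of_record_splitting_zone would read with fix 1 applied. Only
the while condition is taken from the diff above; the loop body and the
get_insert_lsn helper name are paraphrased from memory and may not match
the test file exactly.)

    my $end_lsn     = get_insert_lsn($node);
    my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold
        || $page_offset <= $SizeOfXLogShortPHD)
    {
        # Keep inserting filler records until the insert position is clear
        # of both the record-splitting zone at the end of the page and the
        # page header area at the very start of the next one.
        emit_message($node, $page_threshold);
        $end_lsn     = get_insert_lsn($node);
        $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
    }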
On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin <a.voloshin@postgrespro.ru> wrote:
> I believe there is a small problem in 039_end_of_wal.pl's "xl_tot_len
> zero" test. It assumes that after an immediate shutdown of the server,
> startup recovery should always produce a message matching "invalid
> record length at .*: wanted 24, got 0". However, if the circumstances
> are just right and we happen to hit exactly the edge of a WAL page,
> then the message on startup recovery would instead be "invalid magic
> number 0000 in log segment .*, offset .*". The test does not take that
> into account.

Hi Anton,

Thanks for figuring this out! Right, I see. I will look more closely
when I'm back from summer vacation in a few days, but first reaction:

> Now, reproducing this is somewhat tricky, because the exact position in
> WAL at test time depends on what exactly initdb did, and that not only
> differs between major versions, but also depends on e.g. username
> length, locales available, and perhaps more. Even though this problem
> was originally found "in the wild" on one particular system on one
> particular code branch, I've written a small helper patch to make
> reproduction on master easier, see
> 0001-repro-for-039_end_of_wal-s-problem-with-page-end.patch.
>
> This patch adds a single emit_message call of (hopefully) the right
> size to make sure we hit the end of a WAL block right by the time we
> call $node->stop('immediate') in the "xl_tot_len zero" test. With this
> patch, the "xl_tot_len zero" test fails every time, because the server
> writes "invalid magic number 0000 in log segment" while the test still
> only expects "invalid record length at .*: wanted 24, got 0". Of
> course, this 0001 patch is *not* meant to be committed; it is only an
> issue-reproduction helper.
>
> I can think of two possible fixes:
>
> 1. Update advance_out_of_record_splitting_zone to also avoid stopping
> at exactly the block end:
>
>       my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
>  -    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)
>  +    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold ||
>  +           $page_offset <= $SizeOfXLogShortPHD)
>       {
>
> see 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch
>
> We need to compare with $SizeOfXLogShortPHD (and not with zero) because
> at that point, even though we haven't actually written out the new WAL
> page yet, its header is already in place in memory and taken into
> account for LSN reporting.

I like the fact that this preserves the same end-of-WAL case that we're
trying to test. I don't yet have an opinion on the best way to do it
though. Would it be enough to add emit_message($node, 0) after
advance_out_of_record_splitting_zone()? The thing about this one
specific test that is different from the later ones is that it doesn't
actually write a record header at all; it was relying purely on
pre-existing trailing zeroes, but it assumed the page header would be
valid. As you figured out, that isn't true if we were right on the page
boundary. Perhaps advance_out_of_record_splitting_zone() followed by
emit_message(0) would make that always true, even then?

> 2. Alternatively, amend the "xl_tot_len zero" test to expect the
> "invalid magic number 0000 in WAL segment" message as well:
>
>       $node->start;
>       ok( $node->log_contains(
>  +            "invalid magic number 0000 in WAL segment|" .
>               "invalid record length at .*: expected at least 24, got 0",
>               $log_size
>           ),
>
> see 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch

Tolerating the two different messages would weaken the test.
> I think it makes sense to backport whatever the final change ends up
> being to all branches that have 039_end_of_wal (REL_12+).

+1
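(A fragment showing where the suggested call would go in the "xl_tot_len
zero" test; the surrounding lines are paraphrased from this thread rather
than copied from the tree, and $node, emit_message and
advance_out_of_record_splitting_zone are the test's own helpers.)

    advance_out_of_record_splitting_zone($node);
    emit_message($node, 0);    # per the suggestion above: make sure a valid,
                               # written-out page header precedes end of WAL
    $node->stop('immediate');

    my $log_size = -s $node->logfile;
    $node->start;
    ok( $node->log_contains(
            "invalid record length at .*: expected at least 24, got 0",
            $log_size),
        "xl_tot_len zero");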
On Fri, Jan 19, 2024 at 11:35:30AM +1300, Thomas Munro wrote:
> On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin
> <a.voloshin@postgrespro.ru> wrote:
>> I believe there is a small problem in 039_end_of_wal.pl's "xl_tot_len
>> zero" test. It assumes that after an immediate shutdown of the server,
>> startup recovery should always produce a message matching "invalid
>> record length at .*: wanted 24, got 0". However, if the circumstances
>> are just right and we happen to hit exactly the edge of a WAL page,
>> then the message on startup recovery would instead be "invalid magic
>> number 0000 in log segment .*, offset .*". The test does not take that
>> into account.
>
> Thanks for figuring this out! Right, I see. I will look more closely
> when I'm back from summer vacation in a few days, but first reaction:

Thomas, are you planning to look at that?
--
Michael
Hello, Thomas,

On 19/01/2024 01:35, Thomas Munro wrote:
> I don't yet have an opinion on the best way to
> do it though. Would it be enough to add emit_message($node, 0) after
> advance_out_of_record_splitting_zone()?

Yes, indeed that seems to be enough. At least I could not produce any
more "xl_tot_len zero" failures with that addition.

I like this solution the best.

> Tolerating the two different messages would weaken the test.

I agree; I also don't really like this option.

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
On Thu, Feb 15, 2024 at 10:40 PM Anton Voloshin
<a.voloshin@postgrespro.ru> wrote:
> On 19/01/2024 01:35, Thomas Munro wrote:
> > I don't yet have an opinion on the best way to
> > do it though. Would it be enough to add emit_message($node, 0) after
> > advance_out_of_record_splitting_zone()?
>
> Yes, indeed that seems to be enough. At least I could not produce any
> more "xl_tot_len zero" failures with that addition.
>
> I like this solution the best.

Oh, it looks like this new build farm animal "skimmer" might be
reminding us about this issue:

https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD

I don't know why it changed, but since this is an LSN/page alignment
thing, it could be due to external things like an OS upgrade adding more
locales or something that affects initdb. Will look soon and fix.
Thomas Munro <thomas.munro@gmail.com> writes:
> Oh, it looks like this new build farm animal "skimmer" might be
> reminding us about this issue:
> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD
> I don't know why it changed,

At this point it seems indisputable that 7d2c7f08d9 is what broke
skimmer, but that didn't go anywhere near WAL-related code, so how?

My best guess is that that changed the amount of WAL generated by initdb
just enough to make the problem reproduce on this animal. However, why's
it *only* happening on this animal? The amount of WAL we generate isn't
all that system-specific.

			regards, tom lane
On Mon, 6 May 2024 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> My best guess is that that changed the amount of WAL generated by
> initdb just enough to make the problem reproduce on this animal.
> However, why's it *only* happening on this animal? The amount of
> WAL we generate isn't all that system-specific.

I'd say that's a good theory, as it's now passing again [1] after the
recent system_views.sql change done in 521a7156ab.

David

[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-05-06%2017%3A43%3A38
David Rowley <dgrowleyml@gmail.com> writes:
> On Mon, 6 May 2024 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> My best guess is that that changed the amount of WAL generated by
>> initdb just enough to make the problem reproduce on this animal.
>> However, why's it *only* happening on this animal? The amount of
>> WAL we generate isn't all that system-specific.

> I'd say that's a good theory, as it's now passing again [1] after the
> recent system_views.sql change done in 521a7156ab.

Hm. It occurs to me that there *is* a system-specific component to the
amount of WAL emitted during initdb: the number of locales that
"locale -a" prints translates directly to the number of rows inserted
into pg_collation. So maybe skimmer has a locale set that's a bit
different from anybody else's, and that's what let it see this issue.

			regards, tom lane
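(If anyone wants to eyeball that on a particular animal, a fragment along
these lines, dropped into a TAP test where $node already exists, would
show both numbers side by side; it is illustrative only and not part of
any patch in this thread.)

    use Test::More;
    my $n_locales    = scalar grep { /\S/ } split /\n/, `locale -a`;
    my $n_collations = $node->safe_psql('postgres',
        "SELECT count(*) FROM pg_collation");
    note "locale -a reports $n_locales locales; pg_collation has $n_collations rows";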
On 13/05/2024 00:39, Tom Lane wrote:
> Hm. It occurs to me that there *is* a system-specific component to
> the amount of WAL emitted during initdb: the number of locales
> that "locale -a" prints translates directly to the number of
> rows inserted into pg_collation. [...]

Yes. Another system-specific circumstance affecting the WAL position is
the length of the name of the unix user running initdb. I've seen
039_end_of_wal failing consistently under user aaaaaaaa but working fine
with aaaa, both on the same machine at the same time.

To be more precise, on one particular machine under those particular
circumstances (including the set of locales) it would work for any
username of length < 8 or >= 16, but would fail for lengths 8..15 (in
bytes, not characters, if non-ASCII usernames were used).

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
I am seeing the exact problem described in this thread on my laptop since
commit 490f869. I have yet to do a thorough investigation, but what I've
seen thus far does seem to fit the subtle-differences-in-generated-WAL
theory. If no one is planning to pick up the fix soon, I will try to.

--
nathan
On Sat, Aug 24, 2024 at 10:33 AM Nathan Bossart
<nathandbossart@gmail.com> wrote:
> I am seeing the exact problem described in this thread on my laptop since
> commit 490f869. I have yet to do a thorough investigation, but what I've
> seen thus far does seem to fit the subtle-differences-in-generated-WAL
> theory. If no one is planning to pick up the fix soon, I will try to.

Sorry for dropping that. It looks like we know approximately how to
stabilise it, and I'll look at it early next week if you don't beat me
to it, but please feel free if you would like to.
Thomas Munro <thomas.munro@gmail.com> writes:
> The fix I propose to commit shortly is just the first of those new
> lines, to homogenise the initial state. See attached. The previous
> idea works too, I think, but this bigger hammer is more obviously
> removing variation.

+1, but a comment explaining the need for the pg_switch_wal call seems
in order.

			regards, tom lane
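(A sketch of what the change under discussion could look like with such a
comment; the comment wording and exact placement are illustrative, not
quoted from the attached patch or the eventual commit.)

    # Advance to a fresh WAL segment first, so that every machine starts
    # this test from the same position, regardless of how much WAL initdb
    # happened to generate.
    $node->safe_psql('postgres', "SELECT pg_switch_wal();");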
On Thu, Aug 29, 2024 at 01:55:27AM -0400, Tom Lane wrote:
> Thomas Munro <thomas.munro@gmail.com> writes:
>> The fix I propose to commit shortly is just the first of those new
>> lines, to homogenise the initial state. See attached. The previous
>> idea works too, I think, but this bigger hammer is more obviously
>> removing variation.
>
> +1, but a comment explaining the need for the pg_switch_wal call
> seems in order.

+1

--
nathan
Pushed. Thanks!
On Fri, Nov 15, 2024 at 4:54 AM Christoph Berg <myon@debian.org> wrote:
> postgresql 13.17, Debian bullseye, amd64:
>
> t/039_end_of_wal.pl .................. Dubious, test returned 2 (wstat 512, 0x200)

This seems to be the interesting bit:

******** build/src/test/recovery/tmp_check/log/regress_log_039_end_of_wal ********
No such file or directory at /home/myon/projects/postgresql/debian/13/build/../src/test/perl/TestLib.pm line 655.

I assume that must be coming from:

    my @scan_result = scan_server_header('access/xlog_internal.h', ...

which reaches:

    open my $header_h, '<', "$stdout/$header_path" or die "$!";   <-- line 655

Not sure yet what is different in this environment or why you're
suddenly noticing this on 13.17. The logic has been there since 13.13
(i.e. it was backpatched then).
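(For context, scan_server_header in that branch works roughly like this
-- paraphrased rather than quoted from TestLib.pm -- which is why a
pg_config that reports a server include path outside the temp install
makes the open fail.)

    use IPC::Run;

    my ($stdout, $stderr);
    # Ask the binaries under test where the server headers live...
    IPC::Run::run [ 'pg_config', '--includedir-server' ],
        '>', \$stdout, '2>', \$stderr
        or die "could not execute pg_config";
    chomp $stdout;
    # ...then open the requested header under that path.
    open my $header_h, '<', "$stdout/access/xlog_internal.h" or die "$!";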
On Fri, Nov 15, 2024 at 11:21 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> Not sure yet what is different in this environment or why you're
> suddenly noticing this on 13.17. The logic has been there since 13.13
> (i.e. it was backpatched then).

Hi Christoph,

Also, why only this branch, when they all have it? Reproducible or a
one-off? Do you have any more clues?
Re: Thomas Munro
> Also, why only this branch, when they all have it? Reproducible or a
> one-off? Do you have any more clues?

It is (was?) weird. The PG13 build for bullseye on apt.pg.o worked
normally, but when I built for bullseye-security (Debian LTS), I had to
remove the patch that removed the 039_end_of_wal.pl file, because you
had changed the file. The build then failed and I put the
remove-that-file patch back. I did not try to repeat the build.

I'll try again with today's releases.

Christoph
Re: To Thomas Munro
> It is (was?) weird. The PG13 build for bullseye on apt.pg.o worked
> normally, but when I built for bullseye-security (Debian LTS), I had to
> remove the patch that removed the 039_end_of_wal.pl file, because you
> had changed the file. The build then failed and I put the
> remove-that-file patch back. I did not try to repeat the build.
>
> I'll try again with today's releases.

******** build/src/test/recovery/tmp_check/log/regress_log_039_end_of_wal ********
No such file or directory at /home/myon/projects/postgresql/debian/13/build/../src/test/perl/TestLib.pm line 655.

It's still happening, but it's Debian's fault. We used to lobotomize the
relocatability of the binaries a bit to cater for our directory layout,
so in the PG13 in bullseye, includedir-server is
/usr/include/postgresql/13/server even when called from a temp tree.

https://salsa.debian.org/postgresql/postgresql/-/blob/13-bullseye/debian/patches/50-per-version-dirs.patch?ref_type=heads

In the meantime, I had noticed that this logic is no longer required (if
it ever was), so the newer branches (including 13 for apt.pg.o) have a
much shorter version of that patch:

https://salsa.debian.org/postgresql/postgresql/-/blob/13/debian/patches/50-per-version-dirs.patch?ref_type=heads

So, tl;dr: it's been fixed in the meantime, sorry for the noise.

Christoph