Thread: Improving tracking/processing of buildfarm test failures
Hello hackers,
I'd like to discuss ways to improve the buildfarm experience for anyone who
is interested in using the information the buildfarm gives us.
Unless I'm missing something, there is currently no way to determine
whether a concrete failure is known/investigated or fixed, how frequently
it occurs, and so on. In my experience, it's quite possible that a failure
which occurred two years ago and was lost in time indicated, e.g., a race
condition still present in the code/tests and thus still worth fixing. But
without classifying/marking failures, it's hard to find such an interesting
failure among many others.
The first way to improve things I can imagine is to add two fields to the
buildfarm database: a link to the failure discussion (set when the failure
is investigated/reproduced and reported in -bugs or -hackers) and a commit
id/link (set when the failure is fixed). I understand that this requires
modifying the buildfarm code and adding some UI to update these fields,
but it would later allow adding filters to the buildfarm web interface to
show only unknown/uninvestigated failures.
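
For illustration only, this could look roughly like the sketch below (the
table and column names here are invented and do not reflect the actual
buildfarm schema):

-- Purely illustrative; "build_status" and these columns are assumptions,
-- not the real buildfarm database definition.
ALTER TABLE build_status
    ADD COLUMN discussion_url text,  -- set when the failure is reported in -bugs/-hackers
    ADD COLUMN fix_commit text;      -- set when the failure is fixed

-- A web-interface filter for unknown/uninvestigated failures could then be
-- backed by a query as simple as:
SELECT sysname, branch, snapshot, stage
FROM build_status
WHERE stage <> 'OK'
  AND discussion_url IS NULL
  AND fix_commit IS NULL
ORDER BY snapshot DESC;
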
The second way is to create a wiki page, similar to "PostgreSQL 17 Open
Items", say, "Known buildfarm test failures" and fill it like below:
<url to failure1>
<url to failure2>
...
Useful info from the failure logs for reference
...
<link to -hackers thread>
---
This way is less invasive, but it would work well only if most interested
people know of it and use it.
(I could start with the second approach, if you don't mind, and we'll see
how it works.)
Best regards,
Alexander
On Thu, May 23, 2024 at 4:30 PM Alexander Lakhin <exclusion@gmail.com> wrote:
>
> I'd like to discuss ways to improve the buildfarm experience for anyone who
> is interested in using the information the buildfarm gives us.
>
> Unless I'm missing something, there is currently no way to determine
> whether a concrete failure is known/investigated or fixed, how frequently
> it occurs, and so on. In my experience, it's quite possible that a failure
> which occurred two years ago and was lost in time indicated, e.g., a race
> condition still present in the code/tests and thus still worth fixing. But
> without classifying/marking failures, it's hard to find such an interesting
> failure among many others.
>
> The first way to improve things I can imagine is to add two fields to the
> buildfarm database: a link to the failure discussion (set when the failure
> is investigated/reproduced and reported in -bugs or -hackers) and a commit
> id/link (set when the failure is fixed). I understand that this requires
> modifying the buildfarm code and adding some UI to update these fields,
> but it would later allow adding filters to the buildfarm web interface to
> show only unknown/uninvestigated failures.
>
> The second way is to create a wiki page, similar to "PostgreSQL 17 Open
> Items", say, "Known buildfarm test failures" and fill it like below:
> <url to failure1>
> <url to failure2>
> ...
> Useful info from the failure logs for reference
> ...
> <link to -hackers thread>
> ---
> This way is less invasive, but it would work well only if most interested
> people know of it and use it.
> (I could start with the second approach, if you don't mind, and we'll see
> how it works.)
>

I feel it is a good idea to do something about this. It makes sense to
start with something simple and see how it works. I think this can also
help us decide whether we need to chase a particular BF failure
immediately after committing.

--
With Regards,
Amit Kapila.
On Thu, May 23, 2024 at 02:00:00PM +0300, Alexander Lakhin wrote:
> I'd like to discuss ways to improve the buildfarm experience for anyone who
> is interested in using the information the buildfarm gives us.
>
> Unless I'm missing something, there is currently no way to determine
> whether a concrete failure is known/investigated or fixed, how frequently
> it occurs, and so on. In my experience, it's quite possible that a failure
> which occurred two years ago and was lost in time indicated, e.g., a race
> condition still present in the code/tests and thus still worth fixing. But
> without classifying/marking failures, it's hard to find such an interesting
> failure among many others.

I agree this is an area of difficulty consuming buildfarm results. I have an
inefficient template for studying a failure, which your proposals would help:

**** grep recent -hackers for animal name
**** search the log for ~10 strings (e.g. "was terminated") to find the real
     indicator of where it failed
**** search mailing lists for that indicator
**** search buildfarm database for that indicator

> The first way to improve things I can imagine is to add two fields to the
> buildfarm database: a link to the failure discussion (set when the failure
> is investigated/reproduced and reported in -bugs or -hackers) and a commit
> id/link (set when the failure is fixed). I understand that this requires

I bet the hard part is getting data submissions, so I'd err on the side of
making this as easy as possible for submitters. For example, accept free-form
text for quick notes, not only URLs and commit IDs.

> modifying the buildfarm code and adding some UI to update these fields,
> but it would later allow adding filters to the buildfarm web interface to
> show only unknown/uninvestigated failures.
>
> The second way is to create a wiki page, similar to "PostgreSQL 17 Open
> Items", say, "Known buildfarm test failures" and fill it like below:
> <url to failure1>
> <url to failure2>
> ...
> Useful info from the failure logs for reference
> ...
> <link to -hackers thread>
> ---
> This way is less invasive, but it would work well only if most interested
> people know of it and use it.
> (I could start with the second approach, if you don't mind, and we'll see
> how it works.)

Certainly you doing (2) can only help, though it may help less than (1).

I recommend considering what the buildfarm server could discover and publish
on its own. Examples:

- N members failed at the same step, in a related commit range. Those members
  are now mostly green. Defect probably got fixed quickly.

- Log contains the following lines that are highly correlated with failure.
  The following other reports, if any, also contained them.
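
(As a rough illustration of the first example, a periodic server-side query
could look something like the sketch below; the "failures" relation and its
columns are made up here purely for the sake of the sketch:)

-- Hypothetical relation: one row per failed run, with animal name (sysname),
-- branch, failed step (stage) and snapshot timestamp. Not an existing schema.
SELECT branch, stage,
       count(DISTINCT sysname) AS members_affected,
       min(snapshot) AS first_seen,
       max(snapshot) AS last_seen
FROM failures
WHERE snapshot >= now() - interval '7 days'
GROUP BY branch, stage
HAVING count(DISTINCT sysname) >= 3;
-- If the affected members are mostly green again after last_seen, the defect
-- was probably introduced and fixed within a narrow commit range.
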
Hello Amit and Noah,

24.05.2024 14:15, Amit Kapila wrote:
> I feel it is a good idea to do something about this. It makes sense to
> start with something simple and see how it works. I think this can also
> help us decide whether we need to chase a particular BF failure
> immediately after committing.

24.05.2024 23:00, Noah Misch wrote:
>
>> (I could start with the second approach, if you don't mind, and we'll see
>> how it works.)
> Certainly you doing (2) can only help, though it may help less than (1).

Thank you for paying attention to this!

I've created such a page to accumulate information on test failures:
https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures

I've deliberately added a trivial issue with partition_split, which is
doomed to be fixed soon, to test the information workflow, and I'm going
to add a few other items in the coming days.

Please share your comments and suggestions, if any.

Best regards,
Alexander
On 2024-05-24 Fr 16:00, Noah Misch wrote:
> On Thu, May 23, 2024 at 02:00:00PM +0300, Alexander Lakhin wrote:
>> I'd like to discuss ways to improve the buildfarm experience for anyone who
>> is interested in using the information the buildfarm gives us.
>>
>> Unless I'm missing something, there is currently no way to determine
>> whether a concrete failure is known/investigated or fixed, how frequently
>> it occurs, and so on. In my experience, it's quite possible that a failure
>> which occurred two years ago and was lost in time indicated, e.g., a race
>> condition still present in the code/tests and thus still worth fixing. But
>> without classifying/marking failures, it's hard to find such an interesting
>> failure among many others.
> I agree this is an area of difficulty consuming buildfarm results. I have an
> inefficient template for studying a failure, which your proposals would help:
>
> **** grep recent -hackers for animal name
> **** search the log for ~10 strings (e.g. "was terminated") to find the real
>      indicator of where it failed
> **** search mailing lists for that indicator
> **** search buildfarm database for that indicator
>
>> The first way to improve things I can imagine is to add two fields to the
>> buildfarm database: a link to the failure discussion (set when the failure
>> is investigated/reproduced and reported in -bugs or -hackers) and a commit
>> id/link (set when the failure is fixed). I understand that this requires
> I bet the hard part is getting data submissions, so I'd err on the side of
> making this as easy as possible for submitters. For example, accept free-form
> text for quick notes, not only URLs and commit IDs.
>
>> modifying the buildfarm code and adding some UI to update these fields,
>> but it would later allow adding filters to the buildfarm web interface to
>> show only unknown/uninvestigated failures.
>>
>> The second way is to create a wiki page, similar to "PostgreSQL 17 Open
>> Items", say, "Known buildfarm test failures" and fill it like below:
>> <url to failure1>
>> <url to failure2>
>> ...
>> Useful info from the failure logs for reference
>> ...
>> <link to -hackers thread>
>> ---
>> This way is less invasive, but it would work well only if most interested
>> people know of it and use it.
>> (I could start with the second approach, if you don't mind, and we'll see
>> how it works.)
> Certainly you doing (2) can only help, though it may help less than (1).
>
> I recommend considering what the buildfarm server could discover and publish
> on its own. Examples:
>
> - N members failed at the same step, in a related commit range. Those members
>   are now mostly green. Defect probably got fixed quickly.
>
> - Log contains the following lines that are highly correlated with failure.
>   The following other reports, if any, also contained them.

I'm prepared to help, but also bear in mind that currently the only people
who can submit notes are animal owners who can attach notes to their own
animals. I'm not keen to allow general public submission of notes to the
database. We already get lots of spam requests that we turn away.

If you have queries that you want canned we can look at that. Ditto extra
database fields.

Currently we don't have any processing that correlates different failures,
but that's not inconceivable.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Hello hackers,

25.05.2024 15:00, I wrote:
> I've created such a page to accumulate information on test failures:
> https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures
>

One month later, I'd like to summarize failures that I've investigated
and classified during June, 2024 on the aforementioned wiki page.
(Maybe it would make sense to issue a monthly report with such information
in the future.)

Imagining a hypothetical table, we could get such statistics:

# SELECT br, count(*) FROM failures WHERE dt >= '2024-06-01' AND dt < '2024-07-01' GROUP BY br;
REL_12_STABLE: 6
REL_13_STABLE: 14
REL_14_STABLE: 13
REL_15_STABLE: 10
REL_16_STABLE: 4
HEAD: 47
-- Total: 94
(Counting test failures only, excluding indent-check, Configure, Build errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE dt >= '2024-06-01' AND dt < '2024-07-01');
21

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-06-01' AND dt < '2024-07-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 7;
https://www.postgresql.org/message-id/20240628051353.a0.nmisch@google.com: 13
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#inplace-inval.spec_fails_on_prion_and_trilobite_on_checking_relhasindex
-- Fixed

https://www.postgresql.org/message-id/95ca84ca-39b4-f6aa-260f-da5f73d05a90@gmail.com: 10
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#008_fsm_truncation_failing_on_dodo_in_v14-_due_to_slow_fsync
-- An environmental issue

https://www.postgresql.org/message-id/f748ee55-9e73-3f5e-e879-8865c5e9933a@gmail.com: 9
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#regress-running.2Fregress_fails_on_skink_due_to_timeout
-- An environmental issue

https://www.postgresql.org/message-id/d6ee8761-39d1-0033-1afb-d5a57ee056f2@gmail.com: 9
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#ssl_tests_.28001_ssltests.pl.2C_002_scram.pl.2C_003_sslinfo.pl.29_fail_due_to_TCP_port_conflict
-- A fix proposed, commit pending

https://www.postgresql.org/message-id/4cc2ee93-e03c-8e13-61ed-412e7e6ff19d@gmail.com: 9
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#plperl.sql_failing_in_v15-_on_caiman_with_a_newer_Perl_version
-- Fixed

https://www.postgresql.org/message-id/2509767.1719773880@sss.pgh.pa.us: 7
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#040_pg_createsubscriber.pl_fails_on_Windows_due_to_unterminated_quoted_string
-- Fixed

https://www.postgresql.org/message-id/847814.1715631450@sss.pgh.pa.us: 6
-- https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#Isolation_tests_fail_on_hamerkop_with_.22too_many_clients.22_errors
-- A fix proposed, commit pending

# SELECT fix_link, count(*) FROM failures WHERE dt >= '2024-06-01' AND dt < '2024-07-01' AND fix_link IS NOT NULL GROUP BY fix_link ORDER BY 2 DESC;
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=458fada72: 13
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f853e23bf: 10
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a1333ec04: 7
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b96391382: 3
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e656657f2: 1
-- Total: 5

# SELECT log_link FROM failures WHERE dt >= '2024-06-01' AND dt < '2024-07-01' AND issue_link IS NULL;
-- Not investigated/classified failures

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-17%2004%3A21%3A42
initdb: error: invalid locale settings; check LANG and LC_* environment variables

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-27%2015%3A38%3A27
StopDb-C:4 pg_ctl: server does not shut down
-- The most mysterious issue to me, more information needed

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-06-13%2017%3A58%3A28
StopDb-C:4 pg_ctl: server does not shut down
-- The most mysterious issue to me, more information needed

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-28%2001%3A06%3A00
# Running: pg_ctl -D C:\\prog\\bf\\root\\REL_16_STABLE\\pgsql.build/testrun/recovery/002_archiving\\data/t_002_archiving_standby_data/pgdata -l C:\\prog\\bf\\root\\REL_16_STABLE\\pgsql.build/testrun/recovery/002_archiving\\log/002_archiving_standby.log promote
waiting for server to promote........................................................................................................................... stopped waiting
pg_ctl: server did not promote in time
-- Most probably the machine's performance issue, an issue report is pending.

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-06-04%2022%3A45%3A00
connection error: 'psql: error: connection to server on socket "/tmp/9IDPzZm7Pp/.s.PGSQL.63572" failed: FATAL: role "bf" does not exist'

https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-06-17%2016%3A02%3A03&stg=xversion-upgrade-REL_16_STABLE-HEAD
program "postgres" is needed by pg_ctl but was not found in the same directory as "/home/andrew/bf/root/saves.crake/REL_16_STABLE/bin/pg_ctl"

https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2024-06-17%2017%3A07%3A03&stg=xversion-upgrade-REL_16_STABLE-HEAD
program "postgres" is needed by pg_ctl but was not found in the same directory as "/home/andrew/bf/root/saves.crake/REL_16_STABLE/bin/pg_ctl"

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-07-01%2003%3A13%3A26
+ERROR: could not access file "/repos/build-farm-17/HEAD/inst/lib/postgresql/plpgsql.so": No such file or directory

-- Total: 8

All the queries above are imaginary and some numbers could be inaccurate,
but I think it still represents the current state of affairs.

Best regards,
Alexander
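
P.S. For concreteness, the imaginary table behind the queries above could be
sketched as follows; it is purely illustrative, and nothing like it exists in
the buildfarm database today:

-- Hypothetical table, invented only to give the queries above something to
-- run against.
CREATE TABLE failures (
    dt         timestamptz,  -- when the failure occurred
    br         text,         -- branch (REL_12_STABLE .. HEAD)
    log_link   text,         -- URL of the buildfarm failure log
    issue_link text,         -- URL of the discussion, once classified
    fix_link   text          -- URL/id of the fixing commit, once fixed
);
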
02.07.2024 15:00, Alexander Lakhin wrote:
>
> One month later, I'd like to summarize failures that I've investigated
> and classified during June, 2024 on the aforementioned wiki page.
> (Maybe it would make sense to issue a monthly report with such information
> in the future.)

Please take a look at the July report on the buildfarm failures:

# SELECT br, count(*) FROM failures WHERE dt >= '2024-07-01' AND dt < '2024-08-01' GROUP BY br;
REL_12_STABLE: 11
REL_13_STABLE: 9
REL_14_STABLE: 7
REL_15_STABLE: 10
REL_16_STABLE: 9
REL_17_STABLE: 68
HEAD: 106
-- Total: 220
(Counting test failures only, excluding indent-check, Configure, Build errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE dt >= '2024-07-01' AND dt < '2024-08-01');
40

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-07-01' AND dt < '2024-08-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 9;
https://www.postgresql.org/message-id/20240404170055.qynecay7szu3dgvu@awork3.anarazel.de: 29
-- An environmental issue
https://www.postgresql.org/message-id/a9a97e83-9ec8-5de5-bf69-80e9560f5345@gmail.com: 20
-- Probably fixed
https://www.postgresql.org/message-id/1545399.1720554797@sss.pgh.pa.us: 11
-- Fixed
https://www.postgresql.org/message-id/4db099c8-4a52-3cc4-e970-14539a319466@gmail.com: 9
https://www.postgresql.org/message-id/db093cce-7eec-8516-ef0f-891895178c46@gmail.com: 8
-- An environmental issue; probably fixed
https://www.postgresql.org/message-id/b2037a8d-fe6b-d299-da17-ff5f3214e648@gmail.com: 8
https://www.postgresql.org/message-id/3e2cbd24-f45e-4b2b-ba83-8149214f0a4d@dunslane.net: 8
-- Fixed
https://www.postgresql.org/message-id/68de6498-0449-a113-dd03-e198dded0bac@gmail.com: 8
-- Fixed
https://www.postgresql.org/message-id/3618203.1722473994@sss.pgh.pa.us: 8
-- Fixed

# SELECT count(*) FROM failures WHERE dt >= '2024-07-01' AND dt < '2024-08-01' AND issue_link IS NULL;
-- Unsorted/unhelpful failures
17

And one more metric that might be useful, though it also requires time
analysis: short-lived (eliminated immediately) failures: 83

I also wrote a simple script (see attached) to check for unknown buildfarm
failures using "HTML API", to make sure no failures are missed. Surely, it
could be improved in many ways, but I find it rather useful as-is.

Best regards,
Alexander
On 2024-08-01 Th 5:00 AM, Alexander Lakhin wrote:
>
> I also wrote a simple script (see attached) to check for unknown buildfarm
> failures using "HTML API", to make sure no failures are missed. Surely, it
> could be improved in many ways, but I find it rather useful as-is.

I think we can improve on that. Scraping HTML is not a terribly efficient
way of doing it. I'd very much like to improve the reporting side of the
server.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Hello hackers,

Please take a look at the August report on buildfarm failures:

# SELECT br, count(*) FROM failures WHERE dt >= '2024-08-01' AND dt < '2024-09-01' GROUP BY br;
REL_12_STABLE: 2
REL_13_STABLE: 2
REL_14_STABLE: 12
REL_15_STABLE: 3
REL_16_STABLE: 5
REL_17_STABLE: 17
HEAD: 38
-- Total: 79
(Counting test failures only, excluding indent-check, Configure, Build errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE dt >= '2024-08-01' AND dt < '2024-09-01');
21

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-08-01' AND dt < '2024-09-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 6;
https://www.postgresql.org/message-id/8ce8261a-bf3a-25e6-b473-4808f50a6ea7%40gmail.com: 13
-- An environmental issue; fixed
https://www.postgresql.org/message-id/a9a97e83-9ec8-5de5-bf69-80e9560f5345@gmail.com: 9
-- An environmental issue?; probably fixed
https://www.postgresql.org/message-id/4db099c8-4a52-3cc4-e970-14539a319466@gmail.com: 7
-- Fixed
https://www.postgresql.org/message-id/c720cdc3-5ce0-c410-4854-70788175ca2c@gmail.com: 6
-- Expected to be fixed with Release 18 of the buildfarm client
https://www.postgresql.org/message-id/657815a2-5a89-fcc1-1c9d-d77a6986bc26@gmail.com: 5
https://www.postgresql.org/message-id/3618203.1722473994@sss.pgh.pa.us: 4
-- Fixed

# SELECT count(*) FROM failures WHERE dt >= '2024-08-01' AND dt < '2024-09-01' AND issue_link IS NULL;
-- Unsorted/unhelpful failures
13

Short-lived failures: 21

There were also two mysterious never-before-seen failures, both of which
occurred on POWER animals:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kingsnake&dt=2024-08-19%2019%3A17%3A59 - REL_17_STABLE
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=iguana&dt=2024-08-29%2013%3A57%3A57 - REL_15_STABLE
(I'm not sure yet whether they should be considered "unhelpful". I'll wait
for more information from these animals/the buildfarm in general to
determine what to do with these failures.)

Best regards,
Alexander
Hello everyone,

I am a developer interested in this project. I had a little involvement with
MariaDB and now I'd like to work on Postgres. I've never worked with mailing
lists, so I am not sure if this is the way I should interact. I'd like to be
pointed to some tasks and documents to get started.
On 2024-09-01 Su 2:46 PM, sia kc wrote:
> Hello everyone,
>
> I am a developer interested in this project. I had a little involvement with
> MariaDB and now I'd like to work on Postgres. I've never worked with mailing
> lists, so I am not sure if this is the way I should interact. I'd like to be
> pointed to some tasks and documents to get started.
Do you mean you want to be involved with $subject, or that you just want to be involved in Postgres development generally? If the latter, then replying to a specific email thread is not the way to go, and the first thing to do is look at this wiki page <https://wiki.postgresql.org/wiki/Developer_FAQ>
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Hello hackers,

Please take a look at the October report on buildfarm failures:

# SELECT br, count(*) FROM failures WHERE dt >= '2024-10-01' AND dt < '2024-11-01' GROUP BY br;
REL_12_STABLE: 9
REL_13_STABLE: 9
REL_14_STABLE: 19
REL_15_STABLE: 25
REL_16_STABLE: 12
REL_17_STABLE: 14
master: 109
-- Total: 197
(Counting test failures only, excluding indent-check, Configure, Build errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE dt >= '2024-10-01' AND dt < '2024-11-01');
22

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-10-01' AND dt < '2024-11-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 6;
https://www.postgresql.org/message-id/d63a3295-cac1-4a8e-9de1-0ebab996d53d@eisentraut.org: 54
-- Fixed
https://www.postgresql.org/message-id/CAA4eK1K_ikMjeKqsOf9SsDAu4S8_CU6n15RP13-j4cMSKn-H+g@mail.gmail.com: 23
-- Fixed
https://www.postgresql.org/message-id/362289.1730241666@sss.pgh.pa.us: 11
-- Will be fixed soon
https://www.postgresql.org/message-id/2480333.1729784872@sss.pgh.pa.us: 6
-- Fixed
https://www.postgresql.org/message-id/657815a2-5a89-fcc1-1c9d-d77a6986bc26@gmail.com: 5
https://www.postgresql.org/message-id/c638873f-8b1e-4770-ba49-5a0b3e140cd9@iki.fi: 4
-- Fixed

# SELECT count(*) FROM failures WHERE dt >= '2024-10-01' AND dt < '2024-11-01' AND issue_link IS NULL;
-- Unsorted/unhelpful failures
74

Short-lived failures: 165 (+ 11 from 362289.1730241666@sss.pgh.pa.us)

Best regards,
Alexander
Hello hackers,

Please take a look at the November report on buildfarm failures:

# SELECT br, count(*) FROM failures WHERE dt >= '2024-11-01' AND dt < '2024-12-01' GROUP BY br;
REL_12_STABLE: 8
REL_13_STABLE: 8
REL_14_STABLE: 13
REL_15_STABLE: 10
REL_16_STABLE: 37
REL_17_STABLE: 29
master: 42
-- Total: 147
(Counting test failures only, excluding indent-check, Configure, Build errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE dt >= '2024-11-01' AND dt < '2024-12-01');
26

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2024-11-01' AND dt < '2024-12-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 6;
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a34c33fd2: 48
-- Fixed
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c4252c9ef: 15
-- Fixed
https://www.postgresql.org/message-id/E1tAZbM-001LGu-L8@gemulon.postgresql.org: 9
-- Fixed
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=73c9f91a1: 9
-- Fixed
https://www.postgresql.org/message-id/88785ee8-d50b-ace1-a9f2-34810ee900b5@gmail.com: 7
-- Fixed
https://www.postgresql.org/message-id/a9a97e83-9ec8-5de5-bf69-80e9560f5345@gmail.com: 7

# SELECT count(*) FROM failures WHERE dt >= '2024-11-01' AND dt < '2024-12-01' AND issue_link IS NULL;
-- Unsorted/unhelpful failures
6

Short-lived failures: 107

Best regards,
Alexander