
Recognize failures indicated by JUnit test result file

      Currently, the build failure analyzer plugin requires the user to explicitly define a match expression for each failure.
      However, many jobs already produce a JUnit result file, where the different testcases are listed, and failed ones are indicated as such.
      It would be great if the build failure analyzer plugin could also use this JUnit file, and automatically report a failure based on the name of the failed testcase (as mentioned in the JUnit result file).

      This avoids users having to create redundant match expressions, as the testcases themselves already indicate when something has failed.

      A particular case where this is useful is when using the Build Monitor plugin, which has support for Build Failure Analyzer to display the cause of failed builds.
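For reference, a minimal JUnit result file of the kind discussed here marks a failed testcase with a nested failure element; the testcase name is what BFA could reuse as the cause name (the class, test, and message names below are purely illustrative):

```xml
<testsuite name="integration" tests="2" failures="1">
  <!-- A passing testcase has no child elements. -->
  <testcase classname="com.example.FooTest" name="testConnect"/>
  <!-- A failed testcase carries a <failure> element with message and stack trace. -->
  <testcase classname="com.example.FooTest" name="testLogin">
    <failure message="expected 200 but was 500">stack trace here</failure>
  </testcase>
</testsuite>
```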

          [JENKINS-25115] Recognize failures indicated by JUnit test result file

          What is your feedback on this request? Is it something you agree is useful? How difficult would it be to implement?

          Thomas De Schampheleire added a comment

          Since all of our indication types currently work against the build log, it would require some changes in the code
          to add one that doesn't. However, it shouldn't be that hard, I presume. I think it sounds like a good idea, but I don't think it's something
          we will prioritize. That said, we are open to contributions.

          Tomas Westling added a comment

          Thomas De Schampheleire added a comment (edited)

          I made a stab at implementing this feature. The code is not ready for integration, as I simply disabled the original behavior that inspects the build log, but I hope you could help with that. I also did not create any new classes, as I was not sure how you would like to structure it.
          The code simply shows how to obtain the test result data and convert it into the build-failure-analyzer causes.

          Could you help with making this ready for integration?

          (edit: remove code and use gist instead)
          See https://gist.github.com/patrickdepinguin/65dd6ddde94aa646c1a8
          for the diff and an example job to test.

          Make sure to add a Post-build action to 'publish JUnit test report' pointing to the file 'results.junit'.

          After running the job, you should see both the standard JUnit test result list and the BFA output.
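The conversion described above is conceptually small: one failure cause per failed test. A self-contained sketch of that mapping follows; the record types are hypothetical stand-ins for Jenkins' TestResult data and BFA's FoundFailureCause, which the real code obtains through the Jenkins API instead:

```java
import java.util.ArrayList;
import java.util.List;

public class FailedTestsToCauses {
    // Hypothetical stand-in for a failed test as exposed by Jenkins' test result data.
    record FailedTest(String name, String stackTrace) {}
    // Hypothetical stand-in for BFA's FoundFailureCause.
    record FailureCause(String name, String description) {}

    // One failure cause per failed test, named after the test and
    // carrying the stack trace as its description.
    static List<FailureCause> toCauses(List<FailedTest> failedTests) {
        List<FailureCause> causes = new ArrayList<>();
        for (FailedTest t : failedTests) {
            causes.add(new FailureCause(t.name(), t.stackTrace()));
        }
        return causes;
    }
}
```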


          I think this requires a bigger overhaul.
          Since this is the first time we would be looking into other files, I think we would need:

          • A new indication type.
          • A new FailureReader.
          • Changes to the BuildFailureScanner so that it can care about files other than the build log.

          We should also think about how to make it easier for others to add new files to scan through.
          Since we are breaking ground in which files we scan, the next one should be easier
          to add than this one.

          I can't help you much in writing the code, since we aren't prioritizing this feature;
          however, I'll gladly code review it.

          Tomas Westling added a comment

          Note that we're not really reading a new file: the data needed is obtained directly from the Jenkins API, which has already parsed the JUnit result file and placed its output in TestResult objects.
          So this is not about letting users define a list of extra files to scan for indications.

          Regarding the class reorganization proposal:

          • you mention adding a new indication type, but I don't think we need to go to that level. The TestResult objects we can obtain directly from Jenkins are really more a Cause than an Indication. At least, that's how I envision the feature: I do not want to configure BFA to specify which indications contribute to which cause. Rather, any failure reported via test results should be treated as a cause in BFA.
            So for each test failure, we create a new Cause.
          • for the same reason (given my limited understanding of the current class layout), I don't think we need a FailureReader either, as the FailureReader seems responsible for finding indications, which we don't need here.
            Unless you want to overhaul things completely, so that a FailureReader does not search for and return indications, but rather returns causes. If it needs indications for that, fine (current code), and if it can query TestResult data (proposed new feature), then equally fine.
            In my view this would need a large redesign, and I'm not sure it's worth the effort given that the code really needed for the feature is very simple and straightforward (as shown in the gist).

          A simpler integration would be this:
          The BuildFailureScanner::scan method gathers a list of foundCauses using the ::findCauses method (original code), but in addition also gathers a foundFailedTests list using a new method ::findFailedTests. (Note how I departed from the JUnit terminology, since TestResults can in fact be created by various other publishers besides JUnit.)
          The foundCauses and the foundFailedTests (also a list of FoundFailureCauses) are concatenated and then registered with the action (as is done today).

          The ::findFailedTests method is basically the code that I showed in the gist, and could be implemented directly in the BuildFailureScanner class, since it is the equivalent of the ::findCauses method.

          This proposal is of course not much different from my original gist, except that the original foundCauses are taken into account and that the newly added code is placed in a method of its own.

          Given my extra clarifications above, how do you feel about this alternative proposal? Thanks!

          Thomas De Schampheleire added a comment
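The control flow of this alternative proposal can be sketched as follows; everything here is a hypothetical, self-contained stand-in for the real BuildFailureScanner code, with hard-coded example data in place of the build log matching and the Jenkins test result query:

```java
import java.util.ArrayList;
import java.util.List;

public class ScannerSketch {
    // Hypothetical stand-in for BFA's FoundFailureCause.
    record FoundFailureCause(String name) {}

    // Original behavior: causes found by matching indications against the build log.
    static List<FoundFailureCause> findCauses() {
        return List.of(new FoundFailureCause("Compilation error"));
    }

    // Proposed addition: one cause per failed test, taken from the test result data.
    static List<FoundFailureCause> findFailedTests() {
        return List.of(new FoundFailureCause("testLogin"));
    }

    // scan() concatenates both lists before registering them with the build action.
    static List<FoundFailureCause> scan() {
        List<FoundFailureCause> all = new ArrayList<>(findCauses());
        all.addAll(findFailedTests());
        return all;
    }
}
```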

          I think I misunderstood your implementation the first time around, thanks for clarifying.
          I looked at the description of the issue and didn't register that you actually see the JUnit results
          as a sort of knowledge base in itself, in parallel to the existing one, rather than as new
          indications in the existing knowledge base.

          I think the simple solution seems good; my only concern is that we are in some way duplicating information
          on the build page. The JUnit test results are already there, and now the failures will be shown by
          the BFA once more. Because of this, I was considering making the feature optional in the BFA configuration.

          Another small thing to consider is categorization: if suddenly all JUnit failures end up in the
          statistics, you should be able to set a category on them, at least a default one; maybe configurable?

          Tomas Westling added a comment

          Thanks Tomas for getting back promptly.

          Yes, I agree that this feature should be configurable, disabled by default. As you noticed, you will get some duplication on the build page. However, my main use case is the Build Monitor plugin, which can display results from Build Failure Analyzer (and does not know anything about test results either). So if I can import the test results into Build Failure Analyzer, they will be displayed directly in the Build Monitor plugin.
          The fact that the build page has duplicate info is not important to me. For me it would even be fine to hide the output of BFA on that page, or collapse it by default, but I guess this may be counter-intuitive for other users.

          Regarding categorization, I haven't used this feature yet. What you mean is to define a category that will be automatically assigned to all causes coming from test results, correct?
          A variation of this is to allow users to specify which category/categories, if any, they want to assign these causes to, from the configuration page.

          In this case, the configuration for this feature would be:

          • enable/disable importing of failed test results as failure causes
          • space-separated list of categories to use for these causes

          Is this what you had in mind?
          Can you provide some pointers on how to handle such configuration?

          Thomas De Schampheleire added a comment
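The space-separated category list proposed above could be parsed with a simple split. A minimal sketch (the method name is made up; where the configured string comes from is up to the plugin's configuration code):

```java
import java.util.Arrays;
import java.util.List;

public class CategoryParsing {
    // Turn the configured space-separated string into a list of category names,
    // tolerating extra whitespace and an empty or missing setting.
    static List<String> parseCategories(String configured) {
        if (configured == null || configured.isBlank()) {
            return List.of();
        }
        return Arrays.asList(configured.trim().split("\\s+"));
    }
}
```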

          Yes, that is what I mean by categorization, and that configuration is precisely what I had in mind.
          I think it will be better than just auto-assigning a "JUnit" category to them (but that could be the default).

          Handling this configuration should be done in the main BFA section of the global config.
          The PluginImpl class and its config.jelly take care of the configuration; the
          information needed in your code can then be obtained via PluginImpl.getInstance().getNameOfYourGetter();

          Tomas Westling added a comment
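The pattern Tomas describes boils down to a singleton exposing getters for the stored settings. A self-contained sketch of that shape follows; the class, field, and getter names are made up for illustration (the real ones live in PluginImpl, with values bound from the config.jelly form rather than hard-coded defaults):

```java
public class PluginImplSketch {
    private static final PluginImplSketch INSTANCE = new PluginImplSketch();

    // Values that would normally be bound from the global config form (config.jelly).
    private boolean testResultParsingEnabled = false;
    private String testResultCategories = "JUnit";

    // Scanner code retrieves configuration through the singleton instance.
    public static PluginImplSketch getInstance() {
        return INSTANCE;
    }

    public boolean isTestResultParsingEnabled() {
        return testResultParsingEnabled;
    }

    public String getTestResultCategories() {
        return testResultCategories;
    }
}
```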

          I have code ready to implement these changes. How do you prefer to exchange this?

          Thomas De Schampheleire added a comment

          Great, I'd prefer a pull request on GitHub.

          Tomas Westling added a comment

          Jan Feindt added a comment -

          The analysis of test errors is a feature that I would also like to use. However, in the context of integration testing, I would like to detect communication or configuration errors.
          For this I would like to use a multi-line regex against the failure details, failure stack trace, failure stderr, and failure stdout.
          Any suggestions?


          @Jan Feindt: I don't fully understand your use case.
          The code I proposed simply takes any failing test and treats it as a failure cause, using the name of the failing test and showing the stack trace.

          How would the regexp feature fit into this?

          Thomas De Schampheleire added a comment

          Jan Feindt added a comment (edited)

          This is my use case: some integration tests exercise business logic that uses registry keys.
          Sometimes there are problems accessing the registry:
          Caused by: java.util.prefs.BackingStoreException: Could not open windowsregistry node Software\JavaSoft\Prefs at root 0x80000001.

          This environment problem should be detected and displayed with a description different from other failing tests.

          The regexp for the given example: .*Could not open windowsregistry node.*
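An indication like this could be checked with a plain java.util.regex pattern; note that matching anywhere inside a multi-line stack trace needs the DOTALL flag so that '.' also crosses line breaks. A minimal sketch (class and method names are made up):

```java
import java.util.regex.Pattern;

public class RegistryFailureMatch {
    // DOTALL lets '.' match line terminators, so the pattern can match a line
    // buried inside a multi-line failure text.
    static final Pattern REGISTRY_PROBLEM =
            Pattern.compile(".*Could not open windowsregistry node.*", Pattern.DOTALL);

    static boolean matches(String failureText) {
        return REGISTRY_PROBLEM.matcher(failureText).matches();
    }
}
```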


          Thanks, I understand now.
          However, in contrast with the currently proposed implementation, this does require explicitly added entries in the known-failure-causes database. So in this case, the comments given by Tomas in the second comment (#comment-214614) do apply. I think such a feature could co-exist with the current, simpler implementation.

          Thomas De Schampheleire added a comment


          Jan Feindt added a comment - opened https://issues.jenkins-ci.org/browse/JENKINS-25437

          Thomas De Schampheleire added a comment - Pull request for the proposed code sent: https://github.com/jenkinsci/build-failure-analyzer-plugin/pull/25

          Seems that this breaks some tests: https://jenkins.ci.cloudbees.com/job/plugins/job/build-failure-analyzer-plugin/75/

          I will look at this. Is it possible to amend an existing pull request, or should I open a new one?

          Thomas De Schampheleire added a comment

          Yes, it is possible: you simply update the repository that you made the pull request from, and the request is updated automatically.

          Tomas Westling added a comment

          Pull request updated and validated fine this time...

          Thomas De Schampheleire added a comment

          This issue can now be closed, after the release of 1.11.0.

          I do think that the description on https://wiki.jenkins-ci.org/display/JENKINS/Build+Failure+Analyzer
          should be updated to mention the new feature (currently it's only briefly mentioned in the list of releases).

          Thomas De Schampheleire added a comment

          J John added a comment -

          Does this release also parse MSTest result files?


          This change does not handle the parsing of any result files. The parsing is done by a post-build publisher, which can be the JUnit publisher, the xUnit publisher, or anything else. These publishers create TestResult objects in Jenkins, which are then read by this plugin.

          Are you already using https://wiki.jenkins-ci.org/display/JENKINS/MSTest+Plugin ?

          Thomas De Schampheleire added a comment

            t_westling Tomas Westling
            patrickdepinguin Thomas De Schampheleire
            Votes: 0
            Watchers: 4