• Blue Ocean 1.2, Blue Ocean 1.3, Blue Ocean 1.4 - beta 1, Blue Ocean 1.4 - beta 2, Blue Ocean 1.4 - beta 3

      Scope

      • Prefix the test name with the path to the stage or parallel branch the test was run in
      • This should have a unit test (at least)
      • Check that the sort works correctly and that failures on the same "path" are grouped together
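      The scope above can be sketched roughly as follows (illustrative Python only; the names and data shapes here are hypothetical and are not the junit or Blue Ocean plugin's actual code):

      ```python
      # Hypothetical sketch: prefix each test name with the stage/parallel
      # path it ran in, and sort failures by path first so failures from
      # the same path are grouped together.
      def display_name(path, test_name):
          """Build a display name like 'Browser Tests / Firefox - some.Test'."""
          prefix = " / ".join(path)
          return f"{prefix} - {test_name}" if prefix else test_name

      def sort_failures(failures):
          """failures: list of (path_tuple, test_name).
          Sorting by path, then test name, groups same-path failures."""
          return sorted(failures, key=lambda f: (f[0], f[1]))
      ```

      A unit test for this would assert both the prefix format and that two failures from the same path end up adjacent after sorting.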

      Note
      There are other ways of displaying this data, but they would all need some design work. We don't have the capacity for that, so we will do the minimum here.

      Example

      Jenkinsfile

      stage('Browser Tests') {
        parallel {
          stage('Firefox') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Chrome') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Safari') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Internet Explorer') {
            steps {
              sh 'mvn test'
            }
          }
        }
      }
      

          [JENKINS-46166] Distinguish tests by stage and parallel

          Andrew Bayer added a comment -

          FYI, jamesdumay, I am going to have to make some changes in junit to support this - the way we're gathering currently is by FlowNode#id where the actual test report is gathered (i.e., in the upcoming junitResults and xunit steps), so either we need to change that to also record the enclosing block(s) in some form or we need to have Blue Ocean map from the leaf FlowNode to its enclosing blocks... I'm tending towards the former (tracking the enclosing blocks in the TestResult mapping to the SuiteResult) at least in part because it'd make supporting something in the classic UI doable.
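          The idea in the comment above — recording the enclosing block(s) for the FlowNode where a test report was gathered — can be sketched like this (illustrative Python; the `Node` class and function names are hypothetical stand-ins, not the actual FlowNode API):

          ```python
          # Hypothetical sketch: walk outward from the leaf node where the
          # report was gathered, collecting the labels of enclosing
          # stage/parallel blocks, outermost first.
          class Node:
              def __init__(self, label=None, parent=None):
                  self.label = label    # stage or parallel branch name, if any
                  self.parent = parent  # enclosing block, or None at the root

          def enclosing_blocks(leaf):
              """Return enclosing block labels for a leaf node, outermost first."""
              labels = []
              current = leaf.parent
              while current is not None:
                  if current.label:
                      labels.append(current.label)
                  current = current.parent
              return list(reversed(labels))
          ```

          Recording this list alongside each suite at gather time (rather than recomputing it in Blue Ocean) is the option the comment leans towards.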


          Andres Rodriguez added a comment -

          Hey,

          Here from JENKINS-27395 as well. Each one of our tests executes in a parallel branch for concurrency purposes. Therefore, this approach would work for us, as we maintain a one-to-one mapping of parallel branch to test.

          Thanks


          James Dumay added a comment -

          abayer would we lose the ability to know what step parsed the test result then?


          James Dumay added a comment -

          philmcardlecg sorry my example was slightly incorrect. Check now.


          Andrew Bayer added a comment -

          jamesdumay - I don't think so. I'm experimenting with possible approaches now.


          James Dumay added a comment -

          Phil, if there are features missing would you mind sending me a quick brain dump at jdumay@cloudbees.com ?


          Andres Rodriguez added a comment -

          I don't want to pile too many things into this ticket, but I wanted to add that it would be nice if the results were collapsible by stage.

          For your example data above, here is how it would look when it first opens:


          > Browser Tests / Firefox (1)
          > Browser Tests / Chrome (1)
          > Browser Tests / Internet Explorer (1)
          > Browser Tests / Safari (1)
          

          Then expand some of the entries:


          > Browser Tests / Firefox (1)
            > appstore.TestThisWillFailAbunch
          > Browser Tests / Chrome (1)
            > appstore.TestThisWillFailAbunch
          > Browser Tests / Internet Explorer (1)
          > Browser Tests / Safari (1)
          

          This grouping would make it easier to parse the data when a large number of test cases fail.

          For example, imagine that there are 500 other test cases that fail, TestThisWillFailAbunch[1..500]. In this scenario everything will be sorted in this order:

          > Browser Tests / Firefox - appstore.TestThisWillFailAbunch1
          > Browser Tests / Firefox - appstore.TestThisWillFailAbunch2
          > Browser Tests / Firefox - appstore.TestThisWillFailAbunch3
          ...
          > Browser Tests / Firefox - appstore.TestThisWillFailAbunch500
          > Browser Tests / Chrome - appstore.TestThisWillFailAbunch1
          ...
          > Browser Tests / Chrome - appstore.TestThisWillFailAbunch500
          ...

          Because the list is fully expanded and takes a long time to scroll, it is hard to answer simple questions like "Did it fail on all browsers, or just on Firefox?"

          The implementation doesn't have to be exactly as I laid it out. The general idea is just that opening the results page and getting bombarded with 10,000+ failing tests isn't great, and with the association of JUnit results to a stage, it could finally be possible to collapse them a bit.

          Sorry for the long post, just wanted to drop my 2c.
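          The collapse-by-stage idea described in this comment can be sketched as follows (illustrative Python; the function names are hypothetical, not part of any plugin):

          ```python
          # Hypothetical sketch: group failing tests by their stage/parallel
          # path and render a collapsed summary line per group, so 500
          # failures in one branch show as a single row with a count.
          def collapse_by_path(failures):
              """failures: list of (path_tuple, test_name).
              Returns a dict mapping 'Stage / Branch' to its failing tests,
              preserving first-seen order (dicts are ordered in Python 3.7+)."""
              groups = {}
              for path, test_name in failures:
                  groups.setdefault(" / ".join(path), []).append(test_name)
              return groups

          def collapsed_view(failures):
              """Render one '> path (count)' line per group."""
              return [f"> {path} ({len(tests)})"
                      for path, tests in collapse_by_path(failures).items()]
          ```

          Expanding a group would then simply list the test names stored under that path.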



          James Dumay added a comment -

          4kochi thanks for the feedback. In this iteration, we will be providing them as a flat list.


          Karl Shultz added a comment - edited

          Testing Notes:

          • As stated in the description, unit tests should be included
          • Automated tests should also be included

          Update: Tests were provided in the PR https://github.com/jenkinsci/blueocean-plugin/pull/1280/files#diff

          Stephan Vedder added a comment -

          Tests still don't get separated for us. We are using the xUnit plugin to submit the test results:

          pipeline {
            agent none
            stages {
              // in declarative pipeline, parallel must be nested inside a stage
              stage('Build and Test') {
                parallel {
                  stage('Windows') {
                    agent {
                      label 'windows'
                    }
                    stages {
                      stage('Build') {
                        steps {
                          // build steps
                        }
                      }
                      stage('Test') {
                        steps {
                          ctest(installation: 'InSearchPath', arguments: '-j 32 --output-on-failure --no-compress-output -T Test -T Submit', workingDir: '../build/MES-build', ignoredExitCodes: '0-255')
                        }
                        post {
                          always {
                            step([$class: 'XUnitBuilder',
                              thresholds: [
                                [$class: 'SkippedThreshold', failureThreshold: '0'],
                                [$class: 'FailedThreshold', failureThreshold: '10']],
                              tools: [[$class: 'CTestType', pattern: 'TestReport/*.xml']]])
                          }
                        }
                      }
                    }
                  }
                  stage('Linux') {
                    agent {
                      label 'linux'
                    }
                    stages {
                      stage('Build') {
                        steps {
                          // build steps
                        }
                      }
                      stage('Test') {
                        steps {
                          ctest(installation: 'InSearchPath', arguments: '-j 32 --output-on-failure --no-compress-output -T Test -T Submit', workingDir: '../build/MES-build', ignoredExitCodes: '0-255')
                        }
                        post {
                          always {
                            step([$class: 'XUnitBuilder',
                              thresholds: [
                                [$class: 'SkippedThreshold', failureThreshold: '0'],
                                [$class: 'FailedThreshold', failureThreshold: '10']],
                              tools: [[$class: 'CTestType', pattern: 'TestReport/*.xml']]])
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }

          This is completely making Jenkins unusable for us at the moment. We don't know if tests are failing on Windows or Linux, which is quite a big deal...


            Assignee: Andrew Bayer (abayer)
            Reporter: James Dumay (jamesdumay)
            Votes: 4
            Watchers: 21
