
BlueOcean UI stuck in "Waiting for run to start"

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Blocker
    • Component: blueocean-plugin
    • Labels: None
    • Environment: Jenkins 2.180, BlueOcean 1.17.0
    • Released As: blue-ocean 1.19.0

      We recently upgraded BlueOcean from 1.16.0 to 1.17.0 and we started observing a weird behaviour in the BlueOcean pipeline UI.

      Frequently (not always) the pipeline UI stops updating the progress while the pipeline is running and the UI is stuck at "Waiting for run to start" (see attached screenshot). When it happens, it does not recover until the pipeline execution completes: once completed, the UI is correctly updated (all steps are green).

      We've also noticed that, when it happens, the underlying requests sent by the browser to the endpoint https://jenkins.DOMAIN/blue/rest/organizations/jenkins/pipelines/PROJECT/branches/master/runs/ID/nodes/ID/steps/ always return an empty array "[]" instead of the expected array of steps. By contrast, if we look at the "Console Output" (old Jenkins UI) during the execution of the pipeline, we can correctly see its progress even while the BlueOcean UI is stuck at "Waiting for run to start".

      This issue seems to disappear if we roll back all BlueOcean plugins from 1.17.0 to 1.16.0.
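
      For anyone triaging, the symptom can be checked outside the browser with a minimal Groovy sketch that queries the same endpoint; the host, job name, run and node ids, and credentials below are placeholders, not values from this report:

          // Query the Blue Ocean steps endpoint directly. While the UI is stuck
          // this returns "[]"; otherwise it returns a JSON array of step objects.
          def url = new URL('https://jenkins.example.com/blue/rest/organizations/jenkins/' +
                  'pipelines/PROJECT/branches/master/runs/42/nodes/13/steps/')
          def conn = url.openConnection()
          conn.setRequestProperty('Authorization',
                  'Basic ' + 'user:apitoken'.bytes.encodeBase64().toString())
          println conn.inputStream.text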

        Attachments:
        1. jenkins_build2.mov (1.07 MB)
        2. jenkins_build2.png (96 kB)
        3. jenkins_build1.mov (970 kB)
        4. jenkins_build1.png (89 kB)
        5. Screenshot 2019-10-17 at 10.08.17.png (16 kB)
        6. screenshot_2019-06-18_at_14.52.11.png (116 kB)

          [JENKINS-58085] BlueOcean UI stuck in "Waiting for run to start"

          Diego Rodriguez added a comment -

          I'm running into this same exact issue as well after upgrading to BlueOcean 1.17.0 (currently on Jenkins 2.181).

          Pietro Pepe added a comment -

          Also in our infrastructure we get the same issue using BlueOcean 1.17.0 (on Jenkins 2.176.1)


          Jonathan B added a comment - edited

          We're also experiencing this after an upgrade to BlueOcean 1.17.0 and Jenkins 2.176.1. I filed https://issues.jenkins-ci.org/browse/JENKINS-58145 about it.

          I tried downgrading BlueOcean back to 1.16.0, but that didn't actually help. I also tried downgrading all the pipeline-model* plugins from 1.39 back to 1.38 (which was the version we were on when this was working properly), but that also did not help.

          From my hasty testing, my best guess right now is that the issue was introduced by one of workflow-step-api:2.20 (2.19 was fine), workflow-durable-task-step:2.31 (2.30 was fine), or workflow-cps:2.70 (2.69 was fine). Those are tricky to downgrade because a complicated web of dependencies requires us to be on the latest of all of them.

          Since you mention that downgrading BlueOcean to 1.16 fixed it for you, I will have to try that again.


          Marco Pracucci added a comment -

          jonathanb1 Downgrading to BlueOcean 1.16 worked for us. Note that, for us, downgrading correctly meant uninstalling all BlueOcean 1.17 plugins first and then re-installing each of them individually at 1.16, without using the "blueocean:1.16" meta package, which kept re-installing the 1.17 version of the plugins. I don't know if there's an easier way, or if this downgrade issue was due to our setup (we didn't investigate much because we were buried with work).
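
          To verify that a downgrade actually took effect given that meta-package pitfall, here is a quick sketch for the Script Console (Manage Jenkins → Script Console) that lists the installed Blue Ocean plugin versions:

              // List all blueocean-* plugins with their versions, so a
              // half-applied downgrade is easy to spot.
              Jenkins.instance.pluginManager.plugins
                  .findAll { it.shortName.startsWith('blueocean') }
                  .sort { it.shortName }
                  .each { println "${it.shortName}: ${it.version}" }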

          Jonathan B added a comment - edited

          pracucci thank you. I revisited this and downgrading to Blue Ocean 1.16 did resolve the issue here as well. When I tried initially, I had downgraded only the metapackage `blueocean`, but all of the subpackages were still on 1.17.


          Reinhold Füreder added a comment - edited

          We are also experiencing this huge problem (though not all the time!?), but we are hesitant to downgrade, as 1.17 contains one really, really helpful new feature: JENKINS-39203. (dnusbaum: based on the BlueOcean changelog one naively assumes that this issue is actually caused by JENKINS-39203, therefore I naively linked the issues.)

          Tobias Honacker added a comment -

          Same here. Blue Ocean is not working.

          Jenkins ver. 2.165 and Blue Ocean 1.17

          Russell Morrisey added a comment -

          I don't know if it's related, but we started seeing this message show up randomly in our Jenkins build log:

          sh: line 1: 3550 Terminated sleep 3

          I could not trace the "sleep 3" command back to any script in our build pipeline.

          Devin Nusbaum added a comment - edited

          "Based on the BlueOcean changelog one naively assumes that this issue is actually caused by JENKINS-39203"

          I'd be surprised if that were related; any problems with that change should just cause the wrong result status for a single stage, so the fact that, per the description, the API is returning an empty array for the steps in a stage makes me think something else is broken. Looking at the changelog, I suspect it is related to the fix for JENKINS-53816, especially given some of the comments on that ticket mentioning that it might have made things worse in some cases.

          rmorrise I think you are running into JENKINS-55308, which is unrelated as far as I know.


          Reinhold Füreder added a comment -

          dnusbaum Sorry, I guess you are right => I'll adapt the issue links

          Elliot Graebert added a comment - edited

          I'm also running into the same issue, which I also commented on here: https://issues.jenkins-ci.org/browse/JENKINS-49131

          This issue is very frustrating, as it makes the entire CI pipeline look like it's hung.

          Jenkins 2.187 and Blue Ocean 1.18.0

          Russell Morrisey added a comment -

          We updated our plugins over the weekend, but we are still unable to see the input stage to approve PROD deployments.

          This is a show-stopper for us. We are dropping all usage of Blue Ocean (except for new pipeline setup) until it's resolved.

          Dmitry Seryogin added a comment -

          With Jenkins 2.164.1, after updating to 1.17.0 and then 1.18.0, we are facing the same issue. We have multiple product versioning and deployment pipelines with when-conditions that alter the stage behaviour. Most pipelines have now been affected by this problem, where stages after when-condition stages just appear dead with the 'waiting' message.

          Unknown Unknown added a comment -

          Same here. No input steps are shown. We are not able to finish a pipeline. This is a disaster.

          Unknown Unknown added a comment -

          After reading through all the comments I came to the conclusion that Blue Ocean must be dead and abandoned. The issue is over 2 months old and breaks Blue Ocean completely. How is it possible that nobody has fixed this the minute after it was reported?

          Devin Nusbaum added a comment -

          I filed a PR that should fix at least some variants of this issue: https://github.com/jenkinsci/blueocean-plugin/pull/2017. I think the main ways to hit this bug are when the Pipeline's execution path, in terms of steps/stages, changes from one run to the next (for example if a when condition is activated in one build but not in the next, if you have a Scripted Pipeline that does something like when in Groovy, or if you changed the Jenkinsfile manually). If anyone has a simple reproducer (just a Jenkinsfile that runs without needing to configure anything special in Jenkins) that does not involve any of those things (or even if it does involve them), I would be interested to see it so I can check whether my patch fixes it.


          Devin Nusbaum added a comment -

          Again, any additional information that anyone has on specific Pipelines that reproduce the issue would be welcome so that I can investigate how my proposed changes will affect them.


          Elliot Graebert added a comment -

          Hey Devin,

          We were able to consistently reproduce the issue on a single Pipeline (it would fail in this way every time). We weren't able to reproduce it by running the pipeline elsewhere, which is weird. We deleted the Job and all its history, and then the issue went away. It sounds like this bug may be connected to something that is stored persistently. Which I know isn't super helpful.

          I'm keeping my eye out for a future occurrence of the issue. If we see it again, what can I grab out of the persistent data that would affect the Blue Ocean UI?

          Devin Nusbaum added a comment - edited

          elliotg Blue Ocean tries to combine in-progress builds with the last successful build of the project so that it can predict what the graph will look like, rather than only showing progress based on the current build. Problems happen when the flow of execution in the last successful build is slightly different from the in-progress build, but similar enough that Blue Ocean still tries to merge the graphs (for example because of a when whose condition was true in the last build but not in this build).

          If you see it again, the minimum data to include would be your Jenkinsfile (generally only the overall structure matters; the exact steps you run are not important). Ideally you would be able to upload the build folders of the last successful build and the current build ($JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER) so we can compare the FlowNodes in their workflow folders, which contain the exact data that Blue Ocean uses to create the visualization.
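
          To compare the two graphs without uploading whole build folders, here is a Script Console sketch that dumps the flow nodes of one build (the job name and build number are placeholders); run it for both the stuck build and the last successful build and diff the output:

              import org.jenkinsci.plugins.workflow.graphanalysis.DepthFirstScanner
              import org.jenkinsci.plugins.workflow.job.WorkflowJob

              def job = Jenkins.instance.getItemByFullName('PROJECT/master', WorkflowJob)
              def run = job.getBuildByNumber(42)
              // Walk every FlowNode in the build's execution and print id, type, and name.
              new DepthFirstScanner().allNodes(run.execution).each { node ->
                  println "${node.id}\t${node.class.simpleName}\t${node.displayName}"
              }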

          Dmitry Seryogin added a comment - edited

          Whenever the pipeline below (some steps taken out of earlier stages), which we use to deploy releases into a target environment, runs and Delay is not 0, the Wait step completes successfully but the following Deploy stage then gets affected by the bug in question. If Delay is skipped due to the when-condition (be it a re-run or a 0 min delay), the pipeline works as intended in BlueOcean.

          Edit: The problem surfaces consistently on each and every run against the test environments, where a 5 min delay is enforced, after having deployed to predelivery (where the delay is 0).

          pipeline {
          	agent none
          	parameters {
          		gitParameter name: 'release',
          			type: 'PT_TAG',
          			branchFilter: 'origin/(.*)',
          			tagFilter: 'v*',
          			sortMode: 'DESCENDING_SMART',
          			selectedValue: 'TOP',
          			useRepository: 'repo',
          			quickFilterEnabled: true,
          			listSize: '5',
          			description: 'git tag'
          
          		choice name: 'environment',
          			choices: getAdminNodes('admin'),
          			description: 'target deployment environment'
          
          		choice name: 'delay',
          			choices: ['0 min', '1 min', '5 min', '10 min', '15 min'],
          			description: 'delay before starting the deploy phase'
          
          		choice name: 'delta_run',
          			choices: ['Yes','No'],
          			description: 'install only the delta compared to prior release'
          
          		booleanParam name: 'deploy_validation',
          			defaultValue: false,
          			description: 'issue additional sanity checks during deploy phase'
          	}
          	options {
          		buildDiscarder(logRotator(numToKeepStr: '5', daysToKeepStr: '30'))
          		timeout(time: 2, unit: 'HOURS')
          		skipStagesAfterUnstable()
          		timestamps()
          		skipDefaultCheckout true
          	}
          	stages {
          		stage  ('SCM') {
          			agent { label 'rel-sbox-pup-a01' }
          			steps {
          				cleanWs notFailBuild: true
          				checkout([
          					$class: 'GitSCM',
          					branches: [[  name: "refs/tags/${params.release}" ]],
          					userRemoteConfigs: [[
          						credentialsId: 'id',
          						url: 'repo'
          					]]
          				])
          			}
          		}
          		stage ('Prepare') {
          			agent { label params.environment }
          			steps {
          				script {
          					String node = env.environment.split('\\-')[1].toUpperCase()
          					if ('CHI'.equals(node) || 'RHO'.equals(node)) {
          						env.delay = '5 min'
          					} else {
          						env.delay = '0 min'
          						println 'Not a test environment - commence immediate deployment'
          					}
          					readFile('/dsa/versions.ctl').split("\r?\n").each { String line ->
          						if (line.equals(env.release)) {
          							println env.release + ' is a known version for ' + env.environment + ', skip project notification'
          							env.delay = '0 min'
          						}
          					}
          					currentBuild.displayName = "#${BUILD_NUMBER} - " + node + " - ${params.release}"
          				}
          			}
          		}
          		stage ('Package') {
          			agent { label 'rel-sbox-pup-a01' }
          			steps {
          				sh label: 'Gather Artefacts', script:
          				"""
          					if [ -z ${M2_HOME} ]; then
          						echo "Maven not configured, abort further actions"
          						exit 1
          					fi
          					mvn -version
          					env_id=`echo ${environment} | cut -d '-' -f2`
          					case \$env_id in
          						prede) mount_point_id="new";;
          							*) mount_point_id=\$env_id
          					esac
          					mount_point=/dsa-\$mount_point_id
          					${WORKSPACE}/tools/mvn/svc_fetch_delivery.sh \$mount_point
          				"""
          			}
          		}
          		stage ('Preflight') {
          			agent { label params.environment }
          			steps {
          				script {
          					env.DELIVERY_DIR = sh(label: 'Delivery Path',
          						script:
          						"""
          							ver_prefix=`echo ${release} | awk -F'_' '{print \$1 "_" \$2 "_next_"}'`
          							delivery_base="/dsa/infrastructure"
          							if [ ${deploy_validation} = "true" ]; then
          								delivery_dir=\$delivery_base/delivery/${release}
          							else
          								delivery_type="full"
          								if [ ${delta_run} = "Yes" ]; then
          									delivery_type="delta"
          								fi
          								delivery_dir=\$delivery_base/delivery/\$ver_prefix\$delivery_type
          							fi
          							echo \$delivery_dir
          						"""
          						, returnStdout: true
          					).trim()
          				}
          				sh label: 'Gather Prerequisites', script:
          				"""
          					echo ${PWD}
          					cd ${DELIVERY_DIR}
          					${DELIVERY_DIR}/delivery.rec.0-2.prereq.load.sh
          				"""
          			}
          		}
          		stage ('Delay') {
          			agent { label params.environment }
          			steps {
          				sh label: 'Wait', script:
          				"""
          					num=`echo ${delay} | cut -d ' ' -f1`
          					if [ \$num -ne 0 ]; then
          						sleep=\$((num * 60))
          						sleep \$sleep
          					fi
          				"""
          			}
          			when {
          				allOf {
          					not {
          						environment name: 'delay',
          						ignoreCase: true,
          						value: '0 min'
          					}
          					not {
          						isRestartedRun()
          					}
          				}
          			}
          		}
          		stage ('Deploy') {
          			agent { label params.environment }
          			steps {
          				withCredentials([
          				usernamePassword(
          					credentialsId: 'id',
          					passwordVariable: 'FMW_PASSWORD',
          					usernameVariable: 'FMW_USER')
          				]) {
          					sh label: 'Install', script:
          					"""
          						cd ${DELIVERY_DIR}
          						${DELIVERY_DIR}/install.sh
          					"""
          				}
          			}
          		}
          	}
          	post {
          		success {
          			build job: 'ADM.DATA_SYNC', propagate: false
          		}
          	}
          }
          


          Devin Nusbaum added a comment -

          timewalker75a Thanks! That looks like the problem I described with when, where the value of the condition changes from run to run, so it should be covered by my patch. I think the simplest reproduction of that kind of issue is a Pipeline like this:

          pipeline {
              stages {
                  stage('First') {
                      steps { sleep 10 }
                  }
                  stage('Second') {
                      when { expression { (currentBuild.number % 0) == 0 } } // Run on even builds, skip on odd builds.
                      steps { sleep 10 }
                  }
                  stage('Third') {
                      steps { sleep 10 }
                  }
              }
          }
          

          The "Second" stage should show the bug on every build after the first one.


          Angelo Loria added a comment -

          I am seeing this issue with every pipeline run; all pipelines are Bitbucket Branch Source jobs. 

          Pipeline example w/ all details removed. This pipeline is called by the jenkinsfile in the solution.

          // this code allows for entire pipeline to be called from jenkinsfile in solution
          def call(body) {
              // evaluate the body block, and collect configuration into the object
              def params = [:]
              body.resolveStrategy = Closure.DELEGATE_FIRST
              body.delegate = params
              body()

              def deployUtils = new DeployUtils(this)
              def gitUtils = new GitUtils(this)
              def jiraUtils = new JiraUtils(this)

              // sets parameters ahead of pipeline being executed
              node('master') {
                  stage('Gathering Parameters for Build') {
                      switch(env.branch_name) {
                          case ~/hotfix.*/:
                              // (case body lost to JIRA's "Unknown macro" rendering)
                              break
                      }
                  }
              }

              pipeline {
                  agent {
                      label "${agentLabel}"
                  }
                  options {
                      timestamps()
                      disableConcurrentBuilds()
                  }
                  stages {
                      stage('Building sln') {
                      }
                      stage('Publishing') {
                      }
                  }
                  post {
                      always {
                      }
                      failure {
                      }
                      success {
                      }
                      cleanup {
                      }
                  }
              }
          }

          Dmitry Seryogin added a comment -

          "I think the simplest reproduction of that kind of issue is a Pipeline like this"

          Given you meant modulo 2 rather than 0, yeah, that reproduces the problem just right: the third stage sits there with "Waiting for run to start" until the stage actually completes and the node renders with the green tick.
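
          For reference, here is the reproducer with the modulo corrected as noted above, plus an agent section, which Declarative otherwise rejects. The "Second" stage runs on even build numbers and is skipped on odd ones, so every build after the first merges against a previous graph with a different shape:

              pipeline {
                  agent any
                  stages {
                      stage('First') {
                          steps { sleep 10 }
                      }
                      stage('Second') {
                          // Run on even builds, skip on odd builds.
                          when { expression { (currentBuild.number % 2) == 0 } }
                          steps { sleep 10 }
                      }
                      stage('Third') {
                          steps { sleep 10 }
                      }
                  }
              }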

          Devin Nusbaum added a comment -

          Blue Ocean 1.19.0 was just released with a fix for at least some aspects of this issue. Please try it out, and if you are still seeing the problem, post a minimal Jenkinsfile that reproduces the behavior you are seeing.


          Bas Broere added a comment - edited

          Sorry, I am not in a position to post a minimal Jenkinsfile, but the problem still persists in the pipeline added as an attachment.
          Version Blue Ocean: 1.19.0 · Core 2.199 · e743640 · 4th September 2019 01:20 AM

          Brian J Murrell added a comment -

          Can we please have this re-opened. I can confirm it's still happening here on 1.19.0 also.

          This, together with stage-view being pretty crippled, means finding logs of currently-running pipelines (i.e. in Pipeline Steps) is pretty cumbersome even for a Jenkins pro, and pretty much unusable for casual users with its lack of collapsibility, etc.

          Devin Nusbaum added a comment -

          bbroere brianjmurrell Do you have a minimal and/or independent Jenkinsfile that reproduces the issue you are seeing, or can you at least post the Jenkinsfile that you do have? Without more details, there isn't really much we can do.


          Brian J Murrell added a comment -

          https://raw.githubusercontent.com/daos-stack/daos/master/Jenkinsfile often exhibits the problem in the Functional/Functional_Hardware stages.

          Ian Wallace-Hoyt added a comment -

          The issue repros nearly 100% of the time using this pipeline when a "buddy build" runs.

          Key details:

          • Jenkins 2.204.1
          • Blue Ocean 1.21.0
          • Amazon EC2 plugin 1.47
          • ec2-android-toolchain: an EC2 instance provisioned using the EC2 plugin
          • Issue repros when a PR is opened, causing:
            • the "Release: build" stage to be skipped
            • the stages "Buddy: build apks" and "Buddy: jvm verify" to execute in parallel
          • The "Buddy: build apks" stage always shows "Queued: Waiting for run to start" in BlueOcean, even though it runs and completes.
          • When the "Buddy: build apks" stage completes successfully, BlueOcean updates to show all the correct stage information.
          • The "Buddy: jvm verify" stage shows progress correctly in BlueOcean.

          def buildAgent = 'ec2-android-toolchain'
          
          pipeline {
              agent none
          
              options{
                  timestamps()
                  timeout(time: 2, unit: 'HOURS')
                  parallelsAlwaysFailFast()
              }
          
              parameters {
                  string(
                          defaultValue: "",
                          description: 'If set, reruns the device test stages only,using artifact from specified build id, skipping all other stages',
                          name: "RERUN_DEVICETEST",
                  )
          
                  booleanParam(
                          defaultValue: false,
                          description: 'Publish release build to crashlytics beta',
                          name: "PUBLISH_BETA",
                  )
              }
          
              environment {
                  //Disable gradle daemon
                  GRADLE_OPTS = "-Dorg.gradle.daemon=false"
              }
          
              stages {
                  stage('Jenkins') {
                      parallel {
                          stage('Release: build') {
                              when {
                                  beforeAgent true
                                  equals expected: PipelineMode.RELEASE, actual: getPipelineMode()
                                  expression { !isDeviceTestOnly() }
                              }
          
                              agent {
                                  label buildAgent
                              }
          
                              environment {
                                  // Number of build numbers to allocate when building the project
                                  // (release builds only)
                                  O2_BUILD_VERSION_BLOCK_SIZE = "64"
          
                                  ORG_GRADLE_PROJECT_publishToBeta = "${params.PUBLISH_BETA ? '1' : '0'}"
                              }
          
                              steps {
                                  installJenkinsSshKey()
                                  checkoutRepo(steps)
                                  sh "./other/jenkins/job_release_build.sh"
                              }
          
                              post {
                                  always {
                                      addArtifacts()
                                      processJvmJunitResults()
                                  }
                              }
                          }
          
                          stage('Buddy: build apks') {
                              when {
                                  beforeAgent true
                                  equals expected: PipelineMode.BUDDY, actual: getPipelineMode()
                                  expression { !isDeviceTestOnly() }
                              }
          
                              agent {
                                  label buildAgent
                              }
          
                              steps {
                                  installJenkinsSshKey()
                                  checkoutRepo(steps)
                                  sh '''#!/bin/bash
          
                                      . "other/jenkins/build_lib.sh"
          
                                      "$ST_ROOT/other/jenkins/initialize_build.sh"
          
                                      execGradle --stacktrace jenkinsBuddyStageBuild
          
                                      "$ST_ROOT/other/jenkins/job_build_upload_sharedartifacts.sh"
                                  '''
                              }
          
                              post {
                                  always {
                                      addArtifacts()
                                  }
                              }
                          }
          
                          stage('Buddy: jvm verify') {
                              when {
                                  beforeAgent true
                                  equals expected: PipelineMode.BUDDY, actual: getPipelineMode()
                                  expression { !isDeviceTestOnly() }
                              }
          
                              agent {
                                  label buildAgent
                              }
          
                              steps {
                                  installJenkinsSshKey()
                                  checkoutRepo(steps)
                                  sh '''#!/bin/bash
          
                                      . "other/jenkins/build_lib.sh"
          
                                      "$ST_ROOT/other/jenkins/initialize_build.sh"
          
                                      execGradle --stacktrace jenkinsBuddyStageVerifyAndTest
                                  '''
                              }
          
                              post {
                                  always {
                                      processJvmJunitResults()
                                  }
                              }
                          }
                      }
                  }
              }
          
              post {
                  failure {
                      notifyUnhealthy()
                  }
          
                  fixed {
                      notifyFixed()
                  }
          
                  unstable {
                      notifyUnhealthy()
                  }
              }
          }
          
          def checkoutRepo(steps) {
              steps.sh '''#!/bin/bash
              git submodule init
              git submodule update
              '''
          }
          
          /**
           * This should only be used on non-ec2 agents. Roles are used to properly authorize
            * agents running in ec2
           *
           * @param cl
           * @return
           */
          @SuppressWarnings("GroovyInfiniteRecursion") // arg list is different and doesn't recurse
          def withAwsCredentials(Closure cl) {
              withAwsCredentials('**********', cl)
          }
          
          /**
           * Installs the jenkins ssh key managed by jenkins
           */
          def installJenkinsSshKey() {
              installSshKey('jenkins-ssh-key',
                      "${env.HOME}/.ssh/id_rsa")
          }
          
          /**
           * Installs an ssh managed by jenkins to the specified path
           *
           * @param credentialId
           * @param destPath
           */
          def installSshKey(String credentialId, String destPath) {
              withCredentials([sshUserPrivateKey(
                      keyFileVariable: 'THEKEY',
                      credentialsId: credentialId)]) {
          
                  sh """
                      cp "$THEKEY" "$destPath"
                      chmod 0600 "$destPath"
                  """
              }
          }
          
          enum PipelineMode {
              BUDDY, // Buddy mode for pipeline
              RELEASE, // Release mode for pipeline
              UNKNOWN // Unknown/unsupported mode. All stages skipped.
          }
          
          /**
           * Each jenkins pipeline execution runs in a single mode
           */
          def getPipelineMode() {
              if (env.CHANGE_ID != null) {
                  return PipelineMode.BUDDY
              } else if (isMainlineBranch()) {
                  return PipelineMode.RELEASE
              } else {
                  return PipelineMode.UNKNOWN
              }
          }
          
          /**
           * Returns true if any of the current branches are a mainline branch, otherwise false
           */
          def isMainlineBranch() {
              def mainlines = [ /^master$/, /^release\/.*$/, /^topic\/.*$/ ]
              return null != scm.branches.find { branch ->
                  mainlines.find { mainlineRegEx ->
                      return branch ==~ mainlineRegEx
                  }
              }
          }
          
          /**
           * Return true if only the device test stages should run
           * @return
           */
          def isDeviceTestOnly() {
              return !params.RERUN_DEVICETEST.isEmpty()
          }
          
          /**
           * Returns true if the device test stage(s) should run
           */
          def shouldRunDeviceTest() {
              return getPipelineMode() != PipelineMode.UNKNOWN
          }
          
          /**
           * Given a steps element, configures it to run the given shard indexes
           */
          def defineStepsForDeviceTest(steps, int totalShards, int...shardIndexes) {
              steps.with {
                  checkoutRepo(steps)
          
                  withAwsCredentials {
                      def url = getArtifactShareUrl("build", deviceTestInputBuildId())
                      sh "./other/jenkins/devicetest_ensure_ready.sh '$url'"
                  }
          
                  script {
                      // PR builds merge the PR HEAD into the target branch. In this case
                      // we just want to pass the PR branch to the manual job. If
                      // it isn't a PR build, then pass the commit instead.
                      def commitToBuild = env.CHANGE_BRANCH?.trim()
                      if (!commitToBuild) {
                          commitToBuild = env.GIT_COMMIT
                      }
          
                      build job: "/Manual Builds/speedtestnet-android/deviceTest",
                              parameters: [
                                      string(name: 'O2_INPUT_URL', value:
                                              getArtifactShareUrl("build", deviceTestInputBuildId())),
                                      string(name: 'O2_OUTPUT_URL', value:
                                              getArtifactShareUrl("deviceTest", deviceTestInputBuildId())),
                                      string(name: 'O2_COMMIT', value: commitToBuild),
                                      string(name: 'O2_BRANCH', value: env.GIT_BRANCH),
                                      string(name: 'O2_SHARD_COUNT', value: "${totalShards}"),
                                      string(name: 'O2_SHARD_INDEXES', value: shardIndexes.join(','))
                              ],
          
                              // Don't fail the stage if downstream job fails. We want to process
                              // test results from unstable builds in our pipeline
                              propagate: false
                  }
              }
          }
          
          /**
           * Build id to use as input for device test stage
           */
          def deviceTestInputBuildId() {
              return params.RERUN_DEVICETEST.isEmpty()
                      ? env.BUILD_ID
                      : params.RERUN_DEVICETEST
          }
          
          /**
           * Get the artifact share path for the given stage.
           * @param stage the stage with which the path is associated
           * @param buildId (optional) build id to use, defaults to current build id
           */
          def getArtifactShareUrl(String stage, String buildId = env.BUILD_ID) {
              return sh(returnStdout: true, script: "./other/jenkins/share_artifacts.sh "
                      + "--action getShareUrl "
                      + "--jobName ${env.JOB_NAME} --jobBuild ${buildId} --jobStage ${stage}"
              ).trim()
          }
          
          /**
           * Process junit results from the jvm tests
           */
          def processJvmJunitResults() {
              junit '**/build/test-results/**/*.xml'
          }
          /**
           * Add the archived apk's as artifacts
           */
          def addArtifacts() {
              archiveArtifacts artifacts:"Mobile4/build/outputs/apk/**/*.apk", fingerprint:true
          }
          
          def notifyUnhealthy() {
              if (getPipelineMode() != PipelineMode.RELEASE) {
                  return
              }
          
              slackSend(channel: '#team-android-dev',
                      color: 'danger',
                      message: "<${currentBuild.absoluteUrl}|${currentBuild.fullDisplayName}>"
                              + " :jenkins_angry:  is unhealthy.")
          }
          
          def notifyFixed() {
              if (getPipelineMode() != PipelineMode.RELEASE) {
                  return
              }
          
              slackSend(channel: '#team-android-dev',
                      color: 'good',
                      message: "<${currentBuild.absoluteUrl}|${currentBuild.fullDisplayName}>"
                              + ":jenkins: is healthy again.")
          }


          Devin Nusbaum added a comment -

          ian_ookla Thanks for the Jenkinsfile! My best guess is that the unfixed part of the issue has to do with having when expressions on more than one stage, or maybe beforeAgent: true, but I'm not sure. Do you have a screenshot of what the graph looks like for your Pipeline when you have the issue?

          Given the comments, I'll go ahead and reopen the issue. I am not sure whether the fixes in Blue Ocean 1.19.0 addressed most of the ways this problem could happen, leaving only some special cases, or whether there are still a lot of outstanding issues.


          Brian J Murrell added a comment -

          dnusbaum A screenshot of the graph for the pipeline in this state looks just like the two existing screenshots.

          Devin Nusbaum added a comment -

          brianjmurrell Yes, I mean for ian_ookla's Pipeline in particular; since we have the associated Jenkinsfile, we could use the graph to see which other stages are or aren't being shown, and in what state. Even better would be a screenshot of the graph from the build with the issue and a screenshot of the graph for the previous build, since the problem relates to how those graphs are being combined.


          Ian Wallace-Hoyt added a comment -

          dnusbaum I opened a new PR and let it build. When it completed successfully, I rebuilt it. The same thing happened in both cases: the "Buddy: build apks" stage shows as queued.

          I took the screenshots and videos after verifying, via the node view, that the stage was actually building.

          jenkins_build1.png

          jenkins_build1.mov

          jenkins_build2.png

          jenkins_build2.mov

          Ranky Lau added a comment -

          Is anyone working on this issue right now?


          boris ivan added a comment -

          This one really hurts; hoping it can be fixed, since it really is a bug and not an enhancement.


          Rasmus Voss added a comment - - edited

          Hi,

          I have the same issue with a stage like the one below. Behind the scenes the stage is running without showing any logs in Blue Ocean, so eventually the pipeline completes.

          stage('HTML Clients') {
              parallel {
                  stage('Designer - msch21') {
                      when { environment name: 'BRANCH_TYPE', value: 'branch' }
                      steps {
                          sh (script: '''
                              command
                          ''')
                      }
                  }
                  stage('Designer - selfhost') {
                      when { environment name: 'BRANCH_TYPE', value: 'release' }
                      steps {
                          sh (script: '''
                              command
                          ''')
                      }
                  }
                  stage('System-On - msch21') {
                      when { environment name: 'BRANCH_TYPE', value: 'branch' }
                      steps {
                          sh (script: '''
                              command
                          ''')
                      }
                  }
                  stage('System-On - selfhost') {
                      when { environment name: 'BRANCH_TYPE', value: 'release' }
                      steps {
                          sh (script: '''
                              command
                          ''')
                      }
                  }
              }
          }

          Donald Morton added a comment -

          I'm seeing the same issue on Jenkins 2.235.3 and Blue Ocean 1.23.2.


          dinesh Gopalakrishnan added a comment -

          I am seeing the same issue as well, with Blue Ocean 1.23.2 and Jenkins 2.235.1.


          Ernest Suryś added a comment - - edited

          Same here, BO: 1.24.2, Jenkins: 2.492.2

          pipeline {
            agent any
            parameters {
              choice(name: 'BuildType', choices: ['Release', 'Develop'], description: ".")
              text(name: 'Comments', defaultValue: '', description: 'Additional information about this build.')
            }
            stages {
              stage('Check Unity version') {
                steps {
                  sh '''
                    command
                  '''
                }
              }
              stage('Build') {
                environment {
                  PROJECT_NAME = sh(script:'basename $(git config --get remote.origin.url) .git', returnStdout: true).trim()
                  DATE = sh(script:'date +%Y%m%d', returnStdout: true).trim()
                  BUILD_DIR = ""
                }
                parallel {
                  stage('Release') {
                    when { expression { params.BuildType == 'Release' } }
                    steps {
                      sh '''
                        command
                      '''
                    }
                  }
                  stage('Develop') {
                    when { expression { params.BuildType == 'Develop' } }
                    steps {
                      sh '''
                        command
                      '''
                    }
                  }
                }
              }
            }
          }
          


          Dharma Indurthy added a comment -

          Did some experimenting. I'm pretty sure this happens with parallel stages when at least one of them has a when expression. Removing the when conditions, or running the stages sequentially instead of in parallel, restores real-time logging.


          boris ivan added a comment -

          Since this is a bug and not an enhancement, would it be possible to get an update on this? It's affecting so many people.


          DP added a comment - - edited

          In terms of the "Waiting for run to start" message in a parallel block, the issue is still reproducible with Blue Ocean 1.24.3.

          As a workaround, I've found that adding a new dummy stage at the end of the parallel block will allow the logs of the other stages in the block to be viewed. The name of this extra stage doesn't matter, but it has to be defined as the last stage in the parallel block.

          For more details, see my comment on JENKINS-48879.


          I've also seen the "Waiting for run to start" message in single stages (without a parallel block), usually when the stages in the pipeline have changed (e.g. after modifying the Jenkinsfile to add a new stage). In this case, the details of all stages are shown once the pipeline completes, and subsequent pipeline runs then show the stages and their logs correctly. I'm not aware of a workaround for this case.


          Ruslan Yemelianov added a comment -

          Hi, I'm seeing the same issue: in the console log the step is executing, but in Blue Ocean I see the "Waiting for run to start" message for the 2nd step in a parallel block.
          Jenkins: 2.263.2
          Blue Ocean: 1.24.0


          Nick Devenish added a comment - - edited

          I'm still seeing this with Blue Ocean 1.24.4 and Jenkins 2.277.1 (docker jenkinsci/blueocean). The previous run failed on this stage, but otherwise the flow structure and when conditions haven't changed since the last run; I did, however, remove the agent specifier from two stages where it had been incorrectly applied:

            parallel {
                stage("Linux") {
                    agent { label 'linux' }
                    stages {
                        stage("Build") {
          -                 agent any
                            ...
                        }
                        stage("Test") {
          -                 agent none
                            ...
                        }
                    }
                }
            }

          It actually started showing the output, but then on a refresh dropped back down to "Waiting for run to start". This build then went on to fail, but it also didn't work on a subsequent run, so maybe it's not related to the agent changes. (Edit: to be clear, it's all showing fine in the classic "console log".)

          Edit 2: Again, today with my test builds it persists, but it isn't just blank: before the job starts (while waiting for an executor) the "current stage" indicator is on the wrong step. Then, when it starts, it shows the first step fine ("checking out from source control"). Clicking on a different node and then back to the one actually executing replaces even that one step with "waiting for run to start".


          Pay Bas added a comment -

          It's been a couple of years now, but sadly this bug is still very much alive and kicking.

          Since our pipelines are heavily parallelized, it makes Blue Ocean pretty much useless until the build is finished.


          Jason Wright added a comment -

          I'm having the same issue with Blue Ocean 1.27.9


            Assignee: Unassigned
            Reporter: Marco Pracucci (pracucci)
            Votes: 44
            Watchers: 61