I doubt this one is reproducible. 

      Going to yourjiraurlhere/computer/api/json?pretty=true&tree=computer[oneOffExecutors[likelyStuck,currentExecutable[result,url]]]{0}

      gives you the jobs currently running.

      One of my jobs is marked as "likely stuck", but its result is "SUCCESS" (and has been "SUCCESS" for 2h30), which makes me doubt the veracity of the "likely stuck" flag. 

      The job isn't running either. It's completed, but is still somehow showing as "likely stuck". 

          [JENKINS-45571] "likely stuck" job is not actually stuck.

          Anna Tikhonova added a comment -

          I'm seeing this issue as well. Lots of executors listed in /computer/api/json?pretty=true&tree=computer[oneOffExecutors[likelyStuck,currentExecutable[building,result,url]]]{0} are in the following state:

          {
            "currentExecutable" : {
              "_class" : "org.jenkinsci.plugins.workflow.job.WorkflowRun",
              "building" : false,
              "result" : "SUCCESS",
              "url" : url
            },
            "likelyStuck" : true
          }

          However, in my case it doesn't seem to be related to resuming pipelines at Jenkins startup. I have written a script to clean up such executors. I haven't restarted Jenkins since the script ran, and I still see new executors like these.
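
          A minimal sketch of such a cleanup script, assuming the approach is to interrupt one-off executors whose executable has already finished (hypothetical Java-style code, not the actual script mentioned above):

              // Minimal sketch (assumed approach): free one-off executors whose
              // executable has already finished. Not the actual cleanup script.
              import hudson.model.Computer;
              import hudson.model.Executor;
              import hudson.model.Queue;
              import hudson.model.Run;
              import jenkins.model.Jenkins;

              public class LostExecutorCleanup {
                  public static void cleanUp() {
                      for (Computer c : Jenkins.get().getComputers()) {
                          for (Executor e : c.getOneOffExecutors()) {
                              Queue.Executable exe = e.getCurrentExecutable();
                              // A "lost" executor: the run is finished (building == false,
                              // result already set) yet it still occupies an executor slot.
                              if (exe instanceof Run && !((Run) exe).isBuilding()) {
                                  e.interrupt(); // ask Jenkins to release the slot
                              }
                          }
                      }
                  }
              }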

          Anna Tikhonova added a comment -

          This bug is of additional interest because it interferes with Throttle Concurrent Builds plugin scheduling. TCB prevents scheduling more builds because it counts those hanging executors. Once there are more hanging executors than the maximum total concurrent builds configured for a job (N), the job is stuck forever ("pending—Already running N builds across all nodes").

          Devin Nusbaum added a comment -

          atikhonova The fact that you are seeing the issue without restarting Jenkins is very interesting. Do you have a pipeline which is able to reproduce the problem consistently?

          Sam Van Oort added a comment -

          Note from investigation: separate from JENKINS-50199, there appears to be a different but related failure mode:

          1. The symptoms described by Anna will be reproduced if the build completes (WorkflowRun#finish is called) but the copyLogsTask never gets invoked or fails, since that task is what actually removes the FlyweightTask and kills the OneOffExecutor. See the CopyLogsTask logic - https://github.com/jenkinsci/workflow-job-plugin/blob/master/src/main/java/org/jenkinsci/plugins/workflow/job/WorkflowRun.java#L403
          2. If the AsynchronousExecution is never completed, we'll see a "likelyStuck" executor for each OneOffExecutor; a sketch of this contract follows below.
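
          A minimal sketch of that contract (hypothetical class and method names, not WorkflowRun's actual code): the OneOffExecutor slot is held from the moment run() throws the AsynchronousExecution until someone calls completed(...) on it, so if the completion path (the copyLogsTask in WorkflowRun) fails or never runs, the slot leaks:

              // Sketch only: hypothetical flyweight executable, not the real WorkflowRun.
              import jenkins.model.queue.AsynchronousExecution;

              public class FlyweightRunSketch {
                  private AsynchronousExecution execution;

                  /** Runs on the OneOffExecutor thread and returns immediately by design. */
                  public void run() {
                      execution = new AsynchronousExecution() {
                          @Override public void interrupt(boolean forShutdown) { /* stop the build */ }
                          @Override public boolean blocksRestart() { return false; }
                          @Override public boolean displayCell() { return true; }
                      };
                      throw execution; // tells Jenkins the work continues asynchronously
                  }

                  /** Must run exactly once when the build finishes; if it never does,
                      the OneOffExecutor lingers and may be flagged "likelyStuck". */
                  public void finish() {
                      execution.completed(null); // releases the OneOffExecutor slot
                  }
              }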

          Anna Tikhonova added a comment - edited

          dnusbaum unfortunately, I don't. I've got a few 1000+ LOC pipelines running continuously. I do not know how to tell which one leaves executors and when.

          A Pipeline build that has such a "likelyStuck" executor looks completed on its build page (no progress bars, build status is set), but I can still see a matching OneOffExecutor on master:

                "_class" : "hudson.model.Hudson$MasterComputer",
                "oneOffExecutors" : [
                  {
                    "currentExecutable" : {
                      "_class" : "org.jenkinsci.plugins.workflow.job.WorkflowRun",
                      "building" : false,    // always false for these lost executors
                      "result" : "SUCCESS",    // always set to some valid build status != null
                      "url" : "JENKINS/job/PIPELINE/BUILD_NUMBER/"
                    },
                    "likelyStuck" : false    // can be true or false
                  }, ...
          

          Devin Nusbaum added a comment - edited

          atikhonova Are you able upload the build directory of the build matching the stuck executor? Specifically, it would be helpful to see build.xml and the xml file(s) in the workflow directory. EDIT: I see now that you can't easily tell which are stuck and which are good. If you can find an executor with likelyStuck: true, and whose build looks like it has otherwise completed or is suck, that would be a great candidate.

          Another note: JENKINS-38381 will change the control flow here significantly.

          Jesse Glick added a comment -

          gives you the jobs currently running

          This is not really an appropriate API query to use for that question. If your interest is limited to all Pipeline builds, FlowExecutionList is likely to be more useful. If you are looking at builds of a particular job (Pipeline or not), I think that information is available from the endpoint for that job.
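
          A minimal sketch of that approach (the helper name is hypothetical; FlowExecutionList comes from the workflow-api plugin):

              // Sketch: list in-flight Pipeline builds via FlowExecutionList instead
              // of scraping executor state from the /computer REST endpoint.
              import java.io.IOException;
              import hudson.model.Queue;
              import org.jenkinsci.plugins.workflow.flow.FlowExecution;
              import org.jenkinsci.plugins.workflow.flow.FlowExecutionList;

              public class RunningPipelines {
                  /** Hypothetical helper: prints every Pipeline build that has not completed. */
                  public static void print() throws IOException {
                      for (FlowExecution execution : FlowExecutionList.get()) {
                          Queue.Executable run = execution.getOwner().getExecutable();
                          System.out.println("Running: " + run);
                      }
                  }
              }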

          Jesse Glick added a comment -

          TCB prevents scheduling more builds because it counts those hanging executors.

          Offhand this sounds like a flaw in TCB. This PR introduced that behavior, purportedly to support the build-flow plugin (a conceptual predecessor of Pipeline née Workflow). If TCB intends to throttle builds per se (rather than work done by those builds—typically node blocks for Pipeline), then there are more direct ways of doing this than counting Executor slots.

          Basil Crow added a comment -

          Offhand this sounds like a flaw in TCB.

          I am attempting to fix this flaw in jenkinsci/throttle-concurrent-builds-plugin#57.

          Basil Crow added a comment -

          I am attempting to fix this flaw in jenkinsci/throttle-concurrent-builds-plugin#57.

          This PR has been merged, and the master branch of Throttle Concurrent Builds now uses FlowExecutionList to calculate the number of running Pipeline jobs, which should work around the issue described in this bug. I have yet to release a new version of Throttle Concurrent Builds with this fix, but there is an incremental build available here. atikhonova, are you interested in testing this incremental build before I do an official release?
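
          A rough sketch of what counting via FlowExecutionList looks like (an assumption for illustration, not the PR's actual code): because FlowExecutionList only tracks live Pipeline executions, builds that have finished but still occupy a OneOffExecutor no longer inflate the count:

              // Sketch (hypothetical): count the running builds of one job through
              // FlowExecutionList rather than by counting busy executor slots.
              import java.io.IOException;
              import hudson.model.Queue;
              import hudson.model.Run;
              import org.jenkinsci.plugins.workflow.flow.FlowExecution;
              import org.jenkinsci.plugins.workflow.flow.FlowExecutionList;

              public class ThrottleCountSketch {
                  public static int countRunning(String fullJobName) throws IOException {
                      int running = 0;
                      for (FlowExecution execution : FlowExecutionList.get()) {
                          Queue.Executable exe = execution.getOwner().getExecutable();
                          if (exe instanceof Run
                                  && ((Run<?, ?>) exe).getParent().getFullName().equals(fullJobName)) {
                              running++;
                          }
                      }
                      return running;
                  }
              }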
