JENKINS-46569

Job execution time includes waiting time


    Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Component/s: pipeline
    • Labels: None

      Description

      The issue occurs if you have a "Multibranch Pipeline" job that takes some time to run, such as:

      pipeline {
          agent any
          
          stages {
               stage('only'){
                  steps {
                      checkout scm
                      sh 'sleep 300'
                  }
              }
          }
      }

      This pipeline gets automatically detected and executed. It can also be kicked off manually by going into the job and clicking the run button beside the branch (e.g. 'master'). If this button is pushed twice in succession (or two branches are committed at the same time), two instances of the job will run. If there is a single build executor that these jobs can run on (I have only seen this when the executor is separate from the "master" node), the first will start running. The second will do some pipeline work to identify what needs to be run, but then wait in the queue for the first job to complete. Once the first job has completed, the second will run to completion.

      For the above job, the first build will show as taking the expected five minutes. However, the second build will show as taking ten minutes: it includes the time it was waiting for an executor. If this happens with many jobs, each recorded time will include all the time that job had to wait in the queue. When looking at the historical builds, it will look like a build took ten minutes to execute even though it was five minutes of waiting and five of executing. This also skews the projected build time for the next run.

      This also shows "10 minutes building on an executor" if the metrics plugin is installed.

      The solution is to exclude the time spent waiting for an executor from the recorded build time. I believe matrix jobs had a similar issue that was fixed in Issue #8112.
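      The discrepancy can be demonstrated directly. The following scripted-pipeline sketch (illustrative only, not part of this report) measures the wall-clock time actually spent on the executor and compares it with the duration Jenkins records via the standard currentBuild wrapper:

      ```groovy
      // Illustrative sketch (scripted pipeline): compare time actually spent
      // on the executor with the duration Jenkins records for the build.
      def onExecutorMs = 0
      node {
          def started = System.currentTimeMillis()
          checkout scm
          sh 'sleep 300'
          onExecutorMs = System.currentTimeMillis() - started
      }
      // If this build had to wait in the queue before the node block above
      // started, the recorded build duration exceeds the measured time.
      echo "Time on executor: ${onExecutorMs} ms"
      echo "Recorded build duration so far: ${currentBuild.duration} ms"
      ```

      When a second build queues behind the first, the echoed executor time stays near five minutes while the recorded duration grows with the queue wait.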


          Activity

          one_random_dev Random Dev added a comment -

          Is there still no workaround for this?
          I agree this should be higher priority.
          When using multibranch pipelines, it risks rendering the timeout option useless.

          david_resnick David Resnick added a comment -

          I agree, this is definitely not minor. It's a big problem for us; a shame that it is not higher priority.

          aarondmarasco_vsi Aaron D. Marasco added a comment -

          Josh Wand the problem with that workaround is that each stage/node needs its own timeout. :-/

          joshwand Josh Wand added a comment -

          I partially get around this by putting the timeout block inside the node block (using procedural pipeline, anyways):

          stage('stage 1') {
            node {
              timeout(30) {
                // do stuff
              }
            }
          }

          But the build times are still wrong, even for a single stage: the total time reported still includes the time spent waiting for an executor.

          marty Marty S added a comment -

          The priority of this should be higher than "Minor", since the additional waiting time will count towards possible timeouts.

          So if I define a timeout of one hour and the job waits for 30 minutes, it will be cancelled after 30 minutes of "real" execution time.

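          The workaround Josh Wand describes has a declarative equivalent (sketch only; stage name, timeout value, and script path are illustrative): because steps only run after the stage's agent is allocated, a timeout step inside the steps block does not start counting until an executor is acquired, unlike a pipeline-level timeout, which includes queue time.

          ```groovy
          pipeline {
              agent any
              stages {
                  stage('only') {
                      steps {
                          // Starts counting only once the agent is allocated,
                          // so queue time no longer eats into the limit.
                          timeout(time: 10, unit: 'MINUTES') {
                              checkout scm
                              sh './run-tests.sh'   // hypothetical test script
                          }
                      }
                  }
              }
          }
          ```

          As Aaron D. Marasco notes above, the drawback is that each stage/node needs its own timeout, and the reported build times remain wrong.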
          ddaehler Daniel Daehler added a comment - edited

          This problem also occurs with pipeline definitions that specify an agent on the pipeline itself. For reasons I won't go into, we cannot execute more than one build per agent, so each agent capable of running the label in question is restricted to one executor and concurrent builds are disabled. If our devs commit to different branches of the project more or less simultaneously, a job is started for each branch, but some of the jobs just idle until an executor is available. The time spent waiting for an initial executor should not be added to the execution time; alternatively, the job should stay pending until an executor is available.

          Any known workarounds for this issue?

          Pipeline

          pipeline {
              agent {
                  node {
                      label 'client-env'
                  }
              }
              options {
                  disableConcurrentBuilds()
              }
              triggers {
                  pollSCM 'H/3 * * * *'
              }
              stages {
                  stage('Build') {
                      steps {
                          echo 'Long running build'
                      }
                  }
              }
          }

          jkimmel Joe Kimmel added a comment -

          +1, please! I don't understand how anyone gets past this. I have a large codebase, and I want to queue up many orthogonal unit tests to be run in parallel on a limited set of workers, but the "time limit" then needs to cover the total time the jobs might spend in the queue. What if there's a different run queued in front of them? I just want a time limit that will kill hung jobs, e.g. I should be able to say "no single unit test suite will ever run for more than 5 minutes including repo checkout, build time, and running" and then kill the job if there's an infinite loop, or an infinite hang on some network call, where "infinite" means "more than 5 minutes since the job started".

          With the current implementation, however, I have to set the timeout to something large like 30 minutes or 1 hour, and there will always be some pathological case where many jobs are queued and some fail due to timeout.

          pleemann pleemann added a comment -

          This problem also appears when there are more jobs running than executors available. After each stage, the job is added to the build queue again. Including the waiting time makes the recorded time quite useless.

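          The re-queuing pleemann describes can be sketched in scripted pipeline (illustrative only; stage names and commands are made up): each node block releases its executor when it ends, so the build re-enters the queue between stages, and all of that waiting is counted in the recorded build time.

          ```groovy
          stage('build') {
              node {
                  sh 'make build'   // hypothetical build command
              }
          }   // executor released here; the build may queue again
          stage('test') {
              node {                // may wait for a free executor
                  sh 'make test'    // hypothetical test command
              }
          }
          ```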

            People

            Assignee:
            Unassigned
            Reporter:
            teeks99 Thomas Kent
            Votes:
            41
            Watchers:
            45
