Jenkins / JENKINS-64606

Throttled jobs are queued while a job is running, but all of them start together once the running job finishes

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • None
    • Environment:
      Ubuntu 18.04.5
      openjdk "1.8.0_275"
      Jenkins 2.269
      throttle-concurrent-builds 2.0.3

      Hello,

      I have a bunch of jobs that I need to throttle so that only one of them runs at any given time. On our old Jenkins instance, which we're migrating away from, we had been using throttle-concurrent-builds combined with its categories feature to achieve this. We've attempted to replicate that setup; the difference now is that we're generating the jobs with job-dsl and keeping our pipelines in code as Jenkinsfiles.

      On our new Jenkins instance, however, this doesn't seem to be fully working. We are observing the following behaviour:

      Imagine there are three jobs, Jobs A through C.

      Job A is currently running. While Job A is running, any other jobs (that is, B and C) get queued as expected. Once Job A finishes, however, jobs B and C are both scheduled straight away, so that two jobs are now running.

      This is obviously not the desired behaviour; we need only a single job running at a time.
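
      For reference, the plugin also ships a `throttle()` pipeline step as an alternative to the job property. A minimal sketch of that approach (untested on our side, so I can't say whether it avoids the problem; the category name is the one from our config):

```groovy
// Sketch, assuming the throttle-concurrent-builds plugin's throttle() step.
// It throttles the node blocks it wraps against the named category.
throttle(['my-pipeline-category']) {
    node {
        // the actual work that must not run concurrently
        sh 'sleep 20'
    }
}
```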

      I've had a look through the config.xml that gets generated on our new instance and compared it with the one from our old instance; below is the diff:

      diff --git 1/old.xml 2/new.xml
      index 9b0bb66..747ad8f 100644
      --- 1/old.xml
      +++ 2/new.xml
      @@ -1,15 +1,15 @@
       <?xml version="1.0"?>
       <properties>
      -  <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.1">
      +  <org.jenkinsci.plugins.workflow.job.properties.DisableConcurrentBuildsJobProperty/>
      +  <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.3">
           <maxConcurrentPerNode>1</maxConcurrentPerNode>
           <maxConcurrentTotal>1</maxConcurrentTotal>
      -    <categories class="java.util.concurrent.CopyOnWriteArrayList">
      -      <string>my-category</string>
      +    <categories>
      +      <string>my-pipeline-category</string>
           </categories>
           <throttleEnabled>true</throttleEnabled>
           <throttleOption>category</throttleOption>
           <limitOneJobWithMatchingParams>false</limitOneJobWithMatchingParams>
      -    <paramsToUseForLimit>PIPELINE_MODE</paramsToUseForLimit>
           <configVersion>1</configVersion>
         </hudson.plugins.throttleconcurrents.ThrottleJobProperty>
       </properties>

      A couple of notes: the parameter `PIPELINE_MODE` has been removed on our new instance, but from looking at the plugin's code, it appears the `paramsToUseForLimit` attribute is ignored when `limitOneJobWithMatchingParams` is set to false.

      Furthermore, I'm not quite sure where `DisableConcurrentBuildsJobProperty` comes from, nor whether it could be what's causing this.
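
      My best guess, which I haven't verified against the plugin source: Job DSL's `concurrentBuild(false)` on a `pipelineJob` is what emits `DisableConcurrentBuildsJobProperty`, making it equivalent to the following declarative Jenkinsfile option:

```groovy
// Assumption: Job DSL's concurrentBuild(false) produces the same job property
// as options { disableConcurrentBuilds() } in a declarative pipeline.
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Example') {
            steps {
                echo 'Only one build of this job should run at a time'
            }
        }
    }
}
```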

      Our Job DSL (which is pretty much the same for all jobs):

      pipelineJob("myjob") {
          concurrentBuild(false)
      
          properties {
              throttleJobProperty {
                  throttleEnabled(true)
                  throttleOption('category')
                  categories(['my-pipeline-category'])
                  maxConcurrentPerNode(1)
                  maxConcurrentTotal(1)
                  limitOneJobWithMatchingParams(false)
                  paramsToUseForLimit(null)
                  matrixOptions {
                      throttleMatrixBuilds(true)
                      throttleMatrixConfigurations(false)
                  }
              }
          }
          
          definition {
              cpsScm {
                  scm {
                      git {
                          remote {
                              url('https://github.com/xxx/xxx.git')
                              credentials('xxxx')
                          }
                      }
                  }
      
                  scriptPath('Jenkinsfile')
              }
          }
      } 

      This looks to me like a bug, but I'm not entirely sure.

      Steps to reproduce the issue have been added as a comment.

          Hosh created issue -

          Hosh added a comment -

          We've identified this to be some sort of race condition that affects pipelines only. We've taken as much as we can out of the equation: we set up an almost empty Jenkins environment in Docker with just the throttle-concurrent-builds plugin configured. These are the steps to reproduce:

          1. Set up a single throttle category (max 1 per node, and max 1 across all nodes).
          2. Manually create three pipeline jobs called JobA, JobB, and JobC with the pipeline code below, with throttling enabled and the created category selected.
          3. Run JobA and wait for it to start (wait until the sleep step kicks in).
          4. Add all three jobs (JobA, JobB, and JobC) to the queue in quick succession (from the list view, click the schedule-build icon as fast as you can).
          5. At this point you should have JobA running (on the sleeping step), with a second run of JobA queued alongside JobB and JobC.
          6. Observe that, as soon as the running JobA finishes, the queued JobA, JobB, and JobC all start at once.

          The pipeline used:

          pipeline {
              agent any
              stages {
                  stage('Hello') {
                      steps {
                          echo 'Hello World'
                      }
                  }
                  stage('Wait') {
                      steps {
                          sh 'sleep 20'
                      }
                  }
                  stage('Goodbye') {
                      steps {
                          echo 'Goodbye World'
                      }
                  }
              }
          }

          Of course, this race condition essentially renders the plugin useless for our purposes.

          We're currently planning to use lockable-resources as an alternative until this issue is fixed, but lockable-resources isn't ideal either: jobs would occupy an executor while waiting for the resource to be unlocked.
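
          The lockable-resources workaround we're considering looks roughly like this (a sketch; the resource name `my-pipeline-lock` is made up, and because the lock is acquired inside a build that already holds an agent, an executor stays occupied while waiting):

```groovy
// Sketch using the lockable-resources plugin's lock() step to serialise
// builds on a shared resource name.
pipeline {
    agent any
    stages {
        stage('Throttled work') {
            steps {
                // Builds queue here until the named resource is free,
                // while still occupying an executor.
                lock('my-pipeline-lock') {
                    sh 'sleep 20'
                }
            }
        }
    }
}
```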

          Hosh added a comment - So we've identified this to be some sort of race condition that affects pipelines only. We've taken out as much as we can out of the equation. So we setup an almost empty Jenkins environment with docker with just concurrent builds set up. These are the steps to reproduce: Set up a single throttle category (max 1 per node, and max 1 across nodes) Manually create three pipeline jobs called JobA, JobB, JobC with the below pipeline code, and throttle concurrent jobs enabled to with the created category selected. Run JobA, and wait for it to start (wait until the sleep step kicks in) Add all the created jobs (JobA, JobB, and JobC) onto the queue in quick succession (from the list view just click schedule build icon as fast as you can). At this point you should have JobA running (on the sleeping step), and a second run of JobA on the queue in addition to JobB and JobC waiting on the queue. Observe that, once running JobA finished, JobA, JobB and JobC started as soon as the first run of JobA finished. The pipeline used: pipeline { agent any stages { stage( 'Hello' ) { steps { echo 'Hello World' } } stage( 'Wait' ) { steps { sh 'sleep 20' } } stage( 'Goodbye' ) { steps { echo 'Goodbye World' } } } } Of course this race condition essentially renders this plugin completely useless. We're currently planning to use lockable-resources as an alternative until this issue gets fixed, but lockable resources isn't ideal as the jobs at this point would take up an executor while waiting for the resources to be unlocked.
          Hosh made changes -
          Description Original: Hello,

I have a bunch of jobs that I need to throttle so that only one of them is running at any given time. On our old Jenkins instance, which we're migrating away from, we had been using throttle-concurrent-builds combined with its categories feature to achieve this. We've attempted to replicate this; the difference now, however, is that we're using job-dsl to generate the jobs and Jenkinsfiles to keep our pipelines in code.

On our new Jenkins instance, however, this doesn't seem to be fully working. We are observing the following behaviour:

Imagine there are three jobs, Jobs A through C.

Job A is currently running. While it runs, the other jobs (B and C) get queued as expected. Once Job A finishes, however, jobs B and C are both scheduled straight away, so that two jobs are now running.

This is not the desired behaviour; we need only a single job running at any given time.

I've had a look through the config.xml that gets generated on our new instance and compared it with our old instance; below is the diff:
          {code:java}
          diff --git 1/old.xml 2/new.xml
          index 9b0bb66..747ad8f 100644
          --- 1/old.xml
          +++ 2/new.xml
          @@ -1,15 +1,15 @@
           <?xml version="1.0"?>
           <properties>
          - <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.1">
          + <org.jenkinsci.plugins.workflow.job.properties.DisableConcurrentBuildsJobProperty/>
          + <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.3">
               <maxConcurrentPerNode>1</maxConcurrentPerNode>
               <maxConcurrentTotal>1</maxConcurrentTotal>
          - <categories class="java.util.concurrent.CopyOnWriteArrayList">
          - <string>my-category</string>
          + <categories>
          + <string>my-pipeline-category</string>
               </categories>
               <throttleEnabled>true</throttleEnabled>
               <throttleOption>category</throttleOption>
               <limitOneJobWithMatchingParams>false</limitOneJobWithMatchingParams>
          - <paramsToUseForLimit>PIPELINE_MODE</paramsToUseForLimit>
               <configVersion>1</configVersion>
             </hudson.plugins.throttleconcurrents.ThrottleJobProperty>
           </properties> {code}
A couple of notes: the parameter `PIPELINE_MODE` has been removed on our new instance, but from looking at the plugin's code, it looks to me like the `paramsToUseForLimit` attribute is ignored when `limitOneJobWithMatchingParams` is set to false.

Furthermore, I'm not sure where `DisableConcurrentBuildsJobProperty` comes from, nor whether it could be what's causing this.

          Our Job DSL (which is pretty much the same for all jobs):
          {code:java}
          pipelineJob("myjob") {
              concurrentBuild(false)

              properties {
                  throttleJobProperty {
                      throttleEnabled(true)
                      throttleOption('category')
                      categories(['my-pipeline-category'])
                      maxConcurrentPerNode(1)
                      maxConcurrentTotal(1)
                      limitOneJobWithMatchingParams(false)
                      paramsToUseForLimit(null)
                      matrixOptions {
                          throttleMatrixBuilds(true)
                          throttleMatrixConfigurations(false)
                      }
                  }
              }
              
              definition {
                  cpsScm {
                      scm {
                          git {
                              remote {
url('https://github.com/xxx/xxx.git')
                                  credentials('xxxx')
                              }
                          }
                      }

                      scriptPath('Jenkinsfile')
                  }
              }
          } {code}
           

          This looks to me like a bug, but I'm not entirely sure.
          New: Hello,

I have a bunch of jobs that I need to throttle so that only one of them is running at any given time. On our old Jenkins instance, which we're migrating away from, we had been using throttle-concurrent-builds combined with its categories feature to achieve this. We've attempted to replicate this; the difference now, however, is that we're using job-dsl to generate the jobs and Jenkinsfiles to keep our pipelines in code.

On our new Jenkins instance, however, this doesn't seem to be fully working. We are observing the following behaviour:

Imagine there are three jobs, Jobs A through C.

Job A is currently running. While it runs, the other jobs (B and C) get queued as expected. Once Job A finishes, however, jobs B and C are both scheduled straight away, so that two jobs are now running.

This is not the desired behaviour; we need only a single job running at any given time.

-I've had a look through the config.xml that gets generated on our new instance and compared it with our old instance; below is the diff:-
          {code:java}
          diff --git 1/old.xml 2/new.xml
          index 9b0bb66..747ad8f 100644
          --- 1/old.xml
          +++ 2/new.xml
          @@ -1,15 +1,15 @@
           <?xml version="1.0"?>
           <properties>
          - <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.1">
          + <org.jenkinsci.plugins.workflow.job.properties.DisableConcurrentBuildsJobProperty/>
          + <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@2.0.3">
               <maxConcurrentPerNode>1</maxConcurrentPerNode>
               <maxConcurrentTotal>1</maxConcurrentTotal>
          - <categories class="java.util.concurrent.CopyOnWriteArrayList">
          - <string>my-category</string>
          + <categories>
          + <string>my-pipeline-category</string>
               </categories>
               <throttleEnabled>true</throttleEnabled>
               <throttleOption>category</throttleOption>
               <limitOneJobWithMatchingParams>false</limitOneJobWithMatchingParams>
          - <paramsToUseForLimit>PIPELINE_MODE</paramsToUseForLimit>
               <configVersion>1</configVersion>
             </hudson.plugins.throttleconcurrents.ThrottleJobProperty>
           </properties>{code}
-A couple of notes: the parameter `PIPELINE_MODE` has been removed on our new instance, but from looking at the plugin's code, it looks to me like the `paramsToUseForLimit` attribute is ignored when `limitOneJobWithMatchingParams` is set to false.-

-Furthermore, I'm not sure where `DisableConcurrentBuildsJobProperty` comes from, nor whether it could be what's causing this.-

          -Our Job DSL (which is pretty much the same for all jobs):-
          {code:java}
          pipelineJob("myjob") {
              concurrentBuild(false)

              properties {
                  throttleJobProperty {
                      throttleEnabled(true)
                      throttleOption('category')
                      categories(['my-pipeline-category'])
                      maxConcurrentPerNode(1)
                      maxConcurrentTotal(1)
                      limitOneJobWithMatchingParams(false)
                      paramsToUseForLimit(null)
                      matrixOptions {
                          throttleMatrixBuilds(true)
                          throttleMatrixConfigurations(false)
                      }
                  }
              }
              
              definition {
                  cpsScm {
                      scm {
                          git {
                              remote {
url('https://github.com/xxx/xxx.git')
                                  credentials('xxxx')
                              }
                          }
                      }

                      scriptPath('Jenkinsfile')
                  }
              }
          } {code}
          -This looks to me like a bug, but I'm not entirely sure.-

          Steps to reproduce the issue have been added as a comment.

          Hosh added a comment -

          basil any thoughts on this? I've been trying to replicate the same behaviour while debugging the plugin in the hopes of fixing the issue, but I'm struggling to do so.


          Hosh added a comment -

          This job-dsl and pipeline seem to reproduce the issue quite nicely:

          job-dsl:

          for (int i in 0..<10) {
              pipelineJob("job${i}") {
                  displayName("Job ${i}")
                  triggers {
                      cron('H/2 * * * *')
                  }
                  definition {
                      cpsScm {
                          scm {
                              git {
                                  remote {
                                      url('git@github.com:hoshsadiq/parallel-test.git')
                                      credentials('git')
                                  }
                                  branches('*/main')
                              }
                          }
                          scriptPath('Jenkinsfile')
                      }
                  }
              }
          }

          pipeline:

          pipeline {
              agent any
          
              options {
                  disableConcurrentBuilds()
                  throttleJobProperty(
                      categories: ['test_a'],
                      throttleEnabled: true,
                      throttleOption: 'category'
                  )
              }
          
              stages {
                  stage('Hello') {
                      steps {
                          sh 'echo "hello world"'
                          sleep 130
                          sh 'echo "bye world"'
                      }
                  }
              }
          } 

The exact same thing is happening: all the jobs get queued up, and a single job ends up running. Once it's


Assignee: Unassigned
Reporter: Hosh (thehosh)
Votes: 2
Watchers: 7