Jenkins / JENKINS-44085

have a simple way to limit the number of parallel branches that run concurrently

      If you have a large number of branches for a parallel step, you may want to be nice and run a maximum of, say, 10 branches at any one time.

      For a single Pipeline project you could use the lockable-resources-plugin to achieve this, but for a multibranch project you would need to create a lock for each PR in advance, and you would want every PR / SCM branch job to be able to run 10 of its own branches in parallel, not just 10 across all of the jobs.

      You could also use the throttle-concurrent-builds plugin, but that only works with nodes and has the same limitation: the categories need to be set up ahead of time.

      Even if the categories/resources did not need to be set up ahead of time (e.g. by enhancing resource creation to allow a capacity of N), you would end up polluting the global configuration for something that is inherently Run based.

      Therefore it would be desirable to add an extra option to the parallel step that limits the maximum number of branches running at any one time.

      e.g. given the following code, only 10 branches would run at once.

      def branches = [:]
      for (int i = 0; i < 1000; i++) {
          def thing = "$i"
          branches[thing] = {
              echo "$thing"
          }
      }
      branches['maxConcurrent'] = 10
      parallel branches
      

      The current workaround would be to use a BlockingDeque, manually adding/removing values when entering and exiting each parallel branch, or to use waitUntil with something like an AtomicInteger (a rough sketch of the latter is shown below).
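      A minimal, untested sketch of the AtomicInteger variant mentioned above (the names and counts here are illustrative only; note that a later comment in this issue advises against using java.util.concurrent classes from CPS-transformed Pipeline code):

      import java.util.concurrent.atomic.AtomicInteger

      def MAX_CONCURRENT = 10
      def inFlight = new AtomicInteger(0)   // how many branches currently hold a slot
      def branches = [:]
      for (int i = 0; i < 100; i++) {
          def name = "$i"
          branches[name] = {
              // claim a slot; waitUntil retries (with its own backoff) while all slots are taken
              waitUntil {
                  if (inFlight.incrementAndGet() <= MAX_CONCURRENT) {
                      return true
                  }
                  inFlight.decrementAndGet()
                  return false
              }
              try {
                  echo "Hello from $name"
              } finally {
                  inFlight.decrementAndGet()   // release the slot for another branch
              }
          }
      }
      parallel branches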


          James Nord added a comment -

          potential workaround:

          def branches = [:]
          
          // setup a latch
          MAX_CONCURRENT = 10
          latch = new java.util.concurrent.LinkedBlockingDeque(MAX_CONCURRENT)
          // put a number of items into the queue to allow that number of branches to run
          for (int i=0;i<MAX_CONCURRENT;i++) {
              latch.offer("$i")
          }
          
          for (int i=0; i < 500; i++) {
              def name = "$i"
              branches[name] = {
                  def thing = null
                  // this will not allow proceeding until there is something in the queue.
                  waitUntil {
                      thing = latch.pollFirst();
                      return thing != null;
                  }
                  try {
                      echo "Hello from $name"
                      sleep time: 5, unit: 'SECONDS'
                      echo "Goodbye from $name"
                  }
                  finally {
                     // put something back into the queue to allow others to proceed
                      latch.offer(thing)
                  }
              }
          }
          
          timestamps {
              parallel branches
          }
          


          Mor L added a comment - edited

          I think the suggestion to have it as a parameter to the parallel step, as proposed in https://issues.jenkins-ci.org/browse/JENKINS-46236, is better than defining it in the branches/jobs map. Would love to see it implemented - and thanks for the workaround.


          Tom Larrow added a comment -

          Agreed, some simple syntax to limit the parallelization would be great, because when you start dynamically building parallel steps, it is easy to get out of control...

           


          Jared Kauppila added a comment -

          This would be great to have. We have a similar case where parallel execution is fine for some systems but falls apart on others because of how many stages are in the parallel execution. Being able to throttle the maximum concurrent execution would be very beneficial.

          Ray Burgemeestre added a comment - edited

          Hey,

          Just wanted to post an alternative solution here. I was using James Nord's workaround, and it worked fine.

          However, at work they pointed me to a different solution; even though it requires a plugin, it needs a bit less magic:

          https://wiki.jenkins.io/display/JENKINS/Lockable+Resources+Plugin

          In my case I configured 5 lockable resources with label XYZ in http://jenkins/configure. Then the code looked something like this:

          def tests = [:]
          for (...) {
              def test_num="$i"
              tests["$test_num"] = {
                  lock(label: "XYZ", quantity: 1, variable: "LOCKED") {
                      println "Locked resource: ${env.LOCKED}"
                      build(job: jobName, wait: true, parameters: parameters)
                  }
              }
          }
          parallel tests 


          Mor L added a comment -

          rayburgemeestre the problem with this approach from my end is that you have to define the lockable resources/quantities beforehand.

          If you want to dynamically limit based on input parameters for example - you can't use this method (well, you can - but up to an extent).

          I still think this is a needed feature (solution as suggested in https://issues.jenkins-ci.org/browse/JENKINS-46236)


          Eugene G added a comment -

          Thank you, rayburgemeestre! For me your solution works like a charm and it is much better than nothing:

          def generateStage(job) {
              return {
                  stage("Build: ${job}") {
                      // https://issues.jenkins-ci.org/browse/JENKINS-44085?focusedCommentId=346951&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-346951
                      lock(label: "throttle_parallel", quantity: 1, variable: "LOCKED") {
                          println "${job} locked resource: ${env.LOCKED}"
                          my_jobs_map[job].build() // my own code
                      }
                  }
              }
          }
          
          pipeline {
              agent any // assumed: a declarative pipeline needs an agent section; adjust to your environment
              stages {
                  stage("Build") {
                      steps {
                          script {
                              parallel_jobs = ["Job1", "Job2", "Job3", "Job4", "Job5", "Job6", "Job7"] //pre-generated
                              println "=======[ Parallel Jobs: ${parallel_jobs} ]======="
                              parallelStagesMap = parallel_jobs.collectEntries {
                                  ["${it}" : generateStage(it)]
                              }
                              timestamps {
                                  parallel parallelStagesMap
                              }
                          }
                      }
                  }
              }
          }

           


          Antoine Lemieux added a comment - edited

          Throwing out some ideas that would be very helpful in our current context.  We are running tests in batches using Docker containers; however, we start too many of them at once, causing our disk to spike.  I am looking for a way to throttle or warm up the start of those containers (100+) in a more linear manner, e.g. one every second instead of all 100 at once.

          I think that adding parameters to parallel could be very powerful and serve many use cases.

          parallel(closures: testing_closures, maxThreadCount: 3, rampupTime: 10s)

          +1

          Ref: https://issues.jenkins-ci.org/browse/JENKINS-46236

          Thanks

          Antoine


          Mario Nitsch added a comment -

          I'm really happy with our solution that is based on a LinkedBlockingQueue.

          def test = ["a","b","c","d","e","f"]
          parallelLimitedBranches(test) {
            echo "processing ${it.value} on branch ${it.branch}"
          }
          
          def parallelLimitedBranches(Collection<Object> elements, Integer maxConcurrentBranches = 4, Boolean failFast = false, Closure body) {
            def branches = [:]
            def latch = new java.util.concurrent.LinkedBlockingQueue(elements)
            maxConcurrentBranches.times {
              // capture the branch number; the nested closure below has its own implicit 'it'
              def branchNumber = it
              branches["$branchNumber"] = {
                def thing = latch.poll()
                while (thing != null) {
                  body(['value': thing, 'branch': branchNumber])
                  thing = latch.poll()
                }
              }
            }
            branches.failFast = failFast

            parallel branches
          }
          


          jerry wiltse added a comment -

          I had exactly the same need, and I think this feature request has a lot of merit and is going to keep coming up for advanced use cases.  Unfortunately, I think implementing the feature as-requested is infeasible, so I make a recommendation here for an alternative approach that might actually be actionable for the Jenkins team. 

          Workarounds: 

          I tried implementing the workarounds proposed here.  They are clever, but fundamentally subvert the desired structure of the job logs and UI.  They create a number of numbered stages corresponding to the maximum number of parallel tasks desired by the user, and then use those as "quasi-executors", with a LinkedBlockingQueue scheduling the actual tasks.  Therefore, it does not produce "1-stage-per-task" in the log or the UI, which is what we get if we loop over a list and dynamically create a stage for each item.  We just want the stages to be run in parallel and throttled.

          We could almost implement the desired behavior in a pipeline today in a relatively simple and intuitive way.  We could use a very similar LinkedBlockingQueue strategy and gradually spawn stages inside the parallel block as we work through our queue.  Almost.

          Fundamental Problem:

          The implementation of the parallel block is such that all the stages executed inside the parallel block must be passed to the ParallelStep constructor (and thus known at the start of the block).  The Map of all closures to be executed is marked final, so new stages cannot be dynamically started inside the parallel block once it has been constructed no matter what.

          https://github.com/jenkinsci/pipeline-plugin/blob/workflow-1.15/cps/src/main/java/org/jenkinsci/plugins/workflow/cps/steps/ParallelStep.java#L47

          The current ParallelStep code is part of the very fundamental pipeline-plugin, and the code required to implement the desired behavior would involve a significant amount of complexity.  Thus, it seems unlikely that anyone can safely or comfortably modify this plugin in a way that supports this feature request.  This is likely why the Jenkins team has not touched this ticket since it was opened in 2017, and why the plugin itself hasn't changed since 2016.

          Suggestions:

          Give us a new ParallelDynamic step to expose Jenkins parallel capabilities for more general use.  Make it virtually identical to parallel in most ways, but have it support adding closures to the map dynamically at runtime.  This approach is flexible, as it would empower users to implement their own scheduling strategies outside the parallel block.  In theory, adding a single function to the API such as "parallel.dynamicStage(newStage)" would likely be enough to satisfy many use cases.  This minimalistic approach avoids getting into the somewhat subjective design space of how general-purpose throttling should be implemented.  Later, someone could provide a default throttled implementation on top if that is still desired, but I don't think it's necessary at first.


          James Nord added a comment -

          > They are clever, but fundamentally subvert the desired structure of the job logs and UI.  They create a number of numbered stages corresponding to the max number of parallel tasks desired by the user

           

          I think you missed my workaround or misunderstood it.  The name is entirely up to what you provide; I just used an integer as an example.  It does create subtasks correctly and they are shown correctly in Blue Ocean (as best Blue Ocean can display a big set of parallel tasks).

          This is solvable without the dynamic behaviour you allude to, and if you want to be able to do that then you should probably file a distinct issue.


          jerry wiltse added a comment - edited

          Apologies, I've just implemented yours and I did indeed misunderstand something fundamental to your approach.  Even though the Map of closures is final, inserting new key/value pairs at runtime does seem to create new stages on the fly.  I read yours and then read nitschsb's example right after, and believed his was just a more polished implementation of the same technique.  In particular, his avoided the use of waitUntil.

           

          After running your implementation, I do see that waitUntil is a bit of a problem from the UX point of view.  Do you think there's anything we can do to avoid this?  With really long jobs, having all these waits isn't really going to be manageable/acceptable.

           

          If we can avoid this, it should be a workable solution.


          James Nord added a comment -

          Was that screenshot from the classic pipeline steps view?

          If so, I doubt there is much the workaround can do about it.  I find that view not really good for visualizing the pipeline (it visualises steps, as opposed to Blue Ocean, which visualises stages with steps).

           


          James Nord added a comment -

          Ahh...

          Untested, but change pollFirst to take and it should sort out all the unnecessary steps.

          One thing to look out for: if you abort the pipeline, make sure all the branches fail quickly.

           


          jerry wiltse added a comment -

          Yes, it is.  Blue Ocean is nice, but this is the default UI/UX and it's what most users in my company still look at out of habit.  Is there no native Java or Groovy "wait" function which isn't a pipeline function?


          James Nord added a comment - edited

          With take you could possibly remove the waitUntil entirely, thus making the step representation cleaner (although you would lose the record of time spent waiting).

          In fact you could argue that waitUntil should not be represented as multiple steps whilst waiting, but just one.

           


          jerry wiltse added a comment - edited

          I tried the following replacement; the first batch of branches runs for 15 minutes and then prints goodbye.  All the other branches fail with the stack traces below.  Having never worked with the blocking queue API, I don't have a good guess as to what the problem is:

           

          Replaced this:

          def thing = null
          waitUntil{ 
            thing = latch.pollFirst(); 
            return thing != null; 
          }
          

           

          With this:

          def thing = latch.take()

          What should it be?

          Also:   java.lang.InterruptedException
          		at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
          		at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
          		at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
          		at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
          		at sun.reflect.GeneratedMethodAccessor5683.invoke(Unknown Source)
          Also:   java.lang.InterruptedException
          		at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
          		at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
          		at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
          		at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
          		at sun.reflect.GeneratedMethodAccessor5683.invoke(Unknown Source)
          


          James Nord added a comment -

          take is not going to work, as it will block the CPS thread.

          I should have known that yesterday.


          jerry wiltse added a comment -

          This naive code seems to work fine, can you think of any issue with using it? 

          while (true) {
              thing = latch.pollFirst();
              if (thing != null) {
                  break;
              }
          }


          James Nord added a comment -
          > This naive code seems to work fine, can you think of any issue with using it?

          CPU usage: you are in a tight spin loop whilst waiting, which will spin the CPS thread and cause other issues.

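          A hedged sketch of how that polling loop could be softened, assuming the same latch deque as in the earlier workaround: yielding with the pipeline sleep step between polls avoids spinning the CPS thread, although it still produces extra step entries in the log, much like waitUntil does.

          def thing = null
          while (thing == null) {
              thing = latch.pollFirst()
              if (thing == null) {
                  // yield to other branches instead of busy-waiting on the CPS thread
                  sleep time: 5, unit: 'SECONDS'
              }
          }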

          Sam Gleske added a comment -

          Added lockable-resources-plugin as a potential dependency since a semaphore step could be implemented.

          Semaphore-like behavior with lock step

          This can be achieved by calculating lock names with the modulo operator to cycle through an integer. Here's an example using rainbow colors.

          int concurrency = 3
          List colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
          Map tasks = [failFast: false]
          for(int i=0; i<colors.size(); i++) {
              String color = colors[i]
              int lock_id = i % concurrency
              tasks["Code ${color}"] = { ->
                  stage("Code ${color}") {
                      lock("color-lock-${lock_id}") {
                          echo "This color is ${color}"
                          sleep 30
                      }
                  }
              }
          
          }
          // execute the tasks in parallel with concurrency limits
          stage("Rainbow") {
              parallel(tasks)
          }
          

          The above code will execute 7 stages in parallel; however it will not run more than 3 concurrently.

          The above will create custom locks:

          • color-lock-0
          • color-lock-1
          • color-lock-2

          All concurrent tasks will race for one of the three locks. It's not perfectly efficient (certainly not as efficient as a real semaphore) and there are some limitations.

          Limitations with this workaround

          Your pipeline will take as long as your slowest locks. So if you unfortunately have several long running jobs racing for the same lock (e.g. color-lock-1), then your pipeline could be longer than if it were a proper semaphore.

          Example scenario with the three locks:

          • color-lock-0 takes 20 seconds to cycle through all jobs.
          • color-lock-1 takes 30 minutes to cycle through all jobs.
          • color-lock-2 takes 2 minutes to cycle through all jobs.

          Then your job will take 30 minutes to run... whereas with a true semaphore it would have been much faster, because the longer-running jobs would take the next available lock in the semaphore rather than be blocked.


          jerry wiltse added a comment - edited

          I don't understand this most recent suggestion. I've never worked with semaphores, proper or otherwise. The drawbacks already sound significant and prohibitive.

          teilo we're still stuck on this.

          Our current implementation adds 25%-75% to the runtime of a parallel job in our bigger jobs, presumably because of the CPU monopolization that you mentioned. Right now, we simply can't use it, and so we still don't have a throttling mechanism for our case.

          If waitUntil() could be changed to be represented as a single step as you mentioned, that could solve it for us with less custom code, which would be desirable.  I'd be surprised if anyone would disagree with the premise of that request.  Where could I file the feature request for that step?  I don't know what plugin it would be part of.

          In the meantime, do you have any other suggestions on how to fix our current implementation so that it doesn't kill performance?

              static def parallelLimitedBranches(
                      CpsScript currentJob,
                      List<String> items,
                      Integer maxConcurrentBranches,
                      Boolean failFast = false,
                      Closure body) {
          
                  def branches = [:]
                  Deque latch = new LinkedBlockingDeque(maxConcurrentBranches)
                  maxConcurrentBranches.times {
                      latch.offer("$it")
                  }
          
                  items.each {
                      branches["${it}"] = {
                          def queueSlot = null
                          while (true) {
                              queueSlot = latch.pollFirst();
                              if (queueSlot != null) {
                                  break;
                              }
                          }
                          try {
                              body(it)
                          }
                          finally {
                              latch.offer(queueSlot)
                          }
                      }
                  }
          
                  currentJob.parallel(branches)
              }
          

          With calling code in the form of:

              parallelLimitedBranches(currentJob, uuids, maxParallelBranches, false) { String uuid ->
                  currentJob.stage("${uuid.trim()}") {
                      currentJob.node(parallelAgentLabel) {
                          // steps go through the script object when called from a library class
                          currentJob.echo "somevalue"
                      }
                  }
              }


          Sam Gleske added a comment - edited

          Using my "color" concurrent lock example...

              static def parallelLimitedBranches(
                      CpsScript currentJob,
                      List<String> items,
                      Integer maxConcurrentBranches,
                      Boolean failFast = false,
                      Closure body) {

                  Map branches = [failFast: failFast]
                  for(int i = 0; i < items.size(); i++) {
                      int lockId = i % maxConcurrentBranches
                      String itemValue = items[i]
                      branches[itemValue] = { ->
                          // pipeline steps are invoked via the script object inside a library class
                          currentJob.lock("${currentJob.rawBuild.parent.fullName}-${lockId}") {
                              body(itemValue)
                          }
                      }
                  }

                  currentJob.parallel(branches)
              }
          

          Make items a def so that it is more versatile. For example, each item can be a map from a matrix of items and not just a String.

          You're doing something weird with trying to manually select agents... don't do that. Use agent labels and label expressions instead. Rely on Jenkins to select an appropriate agent and only focus on executing your code on said agent.
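          For illustration, a minimal sketch of that advice (the label expression and script name are hypothetical; use whatever labels your agents actually carry):

          node('linux && docker') {   // let Jenkins pick any agent matching the label expression
              sh './run-tests.sh'     // placeholder for the real work
          }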

          My previous example is not a true semaphore. All it does is generate a limited number of lock IDs using the modulo operator.


          jerry wiltse added a comment -

          Thanks for the effort!  I just updated my last post with the actual calling code rather than the version with the agent labels.

          Indeed, this semaphore-like strategy is another form of the previously posted suggestions: a limited number of named queues, with jobs assigned to queues at the start, which suffers from the limitations/disadvantages you mentioned.  For our case it's a lot less desirable than the current queuing strategy, if only the wait step didn't have the verbosity problem.  I'll continue to look for suggestions on making the current strategy work and working around that problem.


          Sam Gleske added a comment -

          No worries... in practice I haven't encountered the limitation I described; I still get the parallel speedup without completely taking over the Jenkins infrastructure.  The limitation I posed is mostly hypothetical and may not actually impact your project, so it's at least worth trying out to see if you get any gains from it.


          Ronny Schuetz added a comment -

          Has anybody tried https://github.com/jenkinsci/concurrent-step-plugin already?

          jerry wiltse added a comment -

          No. It looks really good; unfortunately it says "can only be used in pipeline script", which seems to indicate that declarative pipelines cannot use it.


          Ronny Schuetz added a comment -

          Just gave https://github.com/jenkinsci/concurrent-step-plugin a try (it doesn't seem to be published anywhere, so I had to build it from source and tweak it a bit to run against a newer Jenkins version): it seems to work fine inside script {} blocks of declarative pipelines.


          jerry wiltse added a comment -

          Thanks for testing this out.  We might be able to give it a try if the author releases an official non-beta version in the near future.


          Sam Gleske added a comment - edited

          Background

          I think I've reached the limits of what's possible in native scripted pipeline with the Lockable Resources plugin as-is, without updating any plugins. Recently I answered a question around using lockable resources and lockable resource limits similar to this issue... I came up with a solution, but it's still not great. I guess I need to look more into what it takes to develop this into a plugin. This is a significant gap in Jenkins' ability to do large-depth parallelism while maintaining limits across a matrix of builds.

          You can see my reply which prompted me to develop this custom withLocks step.

          http://sam.gleske.net/blog/engineering/2020/03/29/jenkins-parallel-conditional-locks.html

          Custom step source

          withLocks custom pipeline step for shared pipeline libraries.

          Usage of custom step

          Obtain two locks.

          withLocks(['foo', 'bar']) {
              // some code runs after both foo and bar locks are obtained
          }
          

          Obtain one lock with parallel limits. The index is evaluated against the limit in order to cap parallelism with a modulo operation, similar to my color-lock workaround example above.

          Note: if you specify multiple locks with limit and index, then the same limits apply to all locks. The next example will show how to limit specific locks without setting limits for all locks.

          Map tasks = [failFast: true]
          for(int i = 0; i < 5; i++) {
              int taskInt = i
              tasks["Task ${taskInt}"] = {
                  stage("Task ${taskInt}") {
                      withLocks(obtain_lock: 'foo', limit: 3, index: taskInt) {
                          echo 'This is an example task being executed'
                          sleep(30)
                      }
                      echo 'End of task execution.'
                  }
              }
          }
          stage("Parallel tasks") {
              parallel(tasks)
          }
          

          Obtain the foo and bar locks, and only proceed if both locks have been obtained simultaneously; however, set foo locks to be limited to 3 simultaneous holders. When specifying multiple locks you can pass in the setting as the lock name plus _limit and _index to define behavior for just that lock.

          In the following scenario, the first three tasks will race for the foo lock with limits and wait on bar for execution. The remaining two tasks will wait on just foo with limits. As an ordering recommendation, put foo first in the locks list so that any limited tasks not blocked by bar can execute right away.

          Please note: when using multiple locks this way, there's actually a performance difference between the order in the list (foo then bar versus the reverse). I have no control over this; it just appears to be a severe limitation in how Pipeline handles the CPS sequence.

          Map tasks = [failFast: true]
          for(int i = 0; i < 5; i++) {
              int taskInt = i
              tasks["Task ${taskInt}"] = {
                  List locks = ['foo', 'bar']
                  if(taskInt > 2) {
                      locks = ['foo']
                  }
                  stage("Task ${taskInt}") {
                      withLocks(obtain_lock: locks, foo_limit: 3, foo_index: taskInt) {
                          echo 'This is an example task being executed'
                          sleep(30)
                      }
                      echo 'End of task execution.'
                  }
              }
          }
          stage("Parallel tasks") {
              parallel(tasks)
          }
          

          You may need to quote the setting depending on the characters used. For example, if you have a lock named with a special character other than an underscore, then it must be quoted.

          withLocks(obtain_lock: ['hello-world'], 'hello-world_limit': 3, ...) ...
          

          If you want locks printed out for debugging purposes you can use the printLocks option. It simply echos out the locks it will attempt to obtain in the parallel stage.

          withLocks(..., printLocks: true, ...) ...
          


          Michael Who added a comment -

          I am wondering whether, when withLocks is used, the thread is still created but stays blocked until the lock is released. Probably a queue is still necessary.


          Sam Gleske added a comment -

          From what I can tell from the source code, it doesn't actually create a concurrency lock.  The Lockable Resources plugin queues "threads" as lightweight jobs, which eventually get scheduled.  So it doesn't work in the traditional sense of what you think of as locks in concurrent high-performance programming.


          Tobias Gruetzmacher added a comment -

          This looks like what https://github.com/jenkinsci/concurrent-step-plugin wants to achieve. lockable-resources is more for locks between different jobs/runs; concurrent-step exposes Java concurrency primitives inside a pipeline run.

          jerry wiltse added a comment -

          Yes, I opened an issue in February asking for a release here:  https://github.com/jenkinsci/concurrent-step-plugin/issues/5


          jerry wiltse added a comment -

          Officially released!  https://github.com/jenkinsci/concurrent-step-plugin/issues/5#issuecomment-627217753

          bright.ma added a comment - edited

          Here is my workaround:

          I transform the task list into a 2D list.

           

                 stage('4. parallel') {
          
                      steps {
                          echo "will run for loop to run scripts on more node"
                          script {
                              def nodeListList = [
                                         ["bf-01", "bf-02", "bf-03"], 
                                         ["bf-03", "bf-02", "bf-01"],
                                       ]                    
                              
                              nodeListList.eachWithIndex { nodeList, i -> // loop the 2D list
                                  stage("pre nodeList ${i}") {
                                      echo "pre stage nodeList: ${nodeList}  ${i}"
                                  }
          
                                  stage("loop nodeList ${i}") {
                                      def jobs = [:]                            
                                      nodeList.eachWithIndex { nodeName, j ->
                                          jobs["on_${nodeName}_${i}_${j}"] = {
                                              node(nodeName) {
                                                  stage("run ${i}_${j}") {
                                                      echo "${nodeName}-${i}-${j}, ${NODE_NAME}, ${env.WORKSPACE}"
                                                      sh   "sleep 60"
                                                  }
                                              }//end node()
                                          }//end jobs[]
                                      } // end nodeList.eachWithIndex 
                                      
                                      println "this jobs is: " + jobs
                                      parallel jobs
                                  }// end stage("loop nodeList")
          
                                  stage("post nodeList  ${i}") {
                                      echo "post stage nodeList: ${nodeList}  ${i}"
                                  }
          
                              }//end nodeListList.eachWithIndex 
                          }//end script
          
                          echo "end run for loop to run scripts on more node"
                      }
          
                  }
          

           

           def nodeListList = [
                       ["bf-01", "bf-02", "bf-03"],
                       ["bf-03", "bf-02", "bf-01"],
           ]
           // with this 2D list, each inner list runs as one parallel group, one group after another
           // (screenshots of the resulting pipeline view omitted)

          // updating the 2D list gives a different parallel pipeline
          def nodeListList = [
               ["bf-01", "bf-02", "bf-03", "bf-04"],
               ["bf-11", "bf-12", "bf-13", "bf-14"],
               ["bf-05", "bf-06"],
               ["bf-01", "bf-02", "bf-03", "bf-04", "bf-07", "bf-08"],
               ["bf-gb-01", "bf-gb-02", "bf-gb-03", "bf-gb-04", "bf-gb-04", "bf-gb-02", "bf-gb-01"],
          ]


          Jesse Glick added a comment -

          Do not use java.util.concurrent from (CPS-transformed) Pipeline script. It is not going to do what you think it does.

          One valid solution requiring no special plugins: https://github.com/jenkinsci/bom/blob/542369a68d4c8b604626b4dbc00a109cc8833836/Jenkinsfile#L47-L71
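          For readers who don't want to follow the link, here is a minimal sketch of one plugin-free pattern in that spirit (an assumption on my part, not a transcription of the linked Jenkinsfile): partition the work into a fixed number of chunks up front and give each parallel branch its own chunk, so no shared mutable state and no java.util.concurrent classes are needed. The trade-off is that load is not rebalanced between branches.

          int MAX_CONCURRENT = 5
          def tasks = []
          for (int i = 0; i < 50; i++) {      // hypothetical work items
              tasks << "task-$i"
          }

          // deal the tasks round-robin into MAX_CONCURRENT chunks
          def chunks = []
          for (int i = 0; i < MAX_CONCURRENT; i++) {
              chunks << []
          }
          for (int i = 0; i < tasks.size(); i++) {
              chunks[i % MAX_CONCURRENT] << tasks[i]
          }

          def branches = [failFast: false]
          for (int i = 0; i < MAX_CONCURRENT; i++) {
              def myChunk = chunks[i]
              def name = "worker-$i"
              branches[name] = {
                  for (int j = 0; j < myChunk.size(); j++) {
                      echo "$name running ${myChunk[j]}"
                  }
              }
          }
          parallel branches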


          jerry wiltse added a comment -

          jglick sorry to bother but I'd like to clarify:

          Is it unsafe/misleading to use this plugin: https://github.com/jenkinsci/concurrent-step-plugin ?

          I know you didn't say exactly that, but I don't fully understand what you were saying, so I wanted to ask for clarification about this plugin directly.


          Jesse Glick added a comment -

          From a brief glance at https://github.com/jenkinsci/concurrent-step-plugin I would say that it is designed incorrectly (confuses “native” Java threads with “virtual” CPS VM threads) and should not be used. Most or all of its steps probably could be reimplemented correctly while using the same Pipeline script interface.


          Jared Kauppila added a comment -

          We're running into this again and would love to see it implemented.

            Assignee: Unassigned
            Reporter: James Nord (teilo)
            Votes: 63
            Watchers: 65