- New Feature
- Resolution: Unresolved
- Major
- None
- Jenkins ver. 2.63
So, I'm having this problem that I described in a similar bug for the lockable-resource plugin (JENKINS-45138). I said to myself, "oh, hey, I remember being able to throttle executions on a per-agent basis!"
Imagine my surprise when I hit the documentation and find that throttle is only applicable inside a step.
I need to acquire, use, and cleanup exclusive access to a resource on each agent. Will throttle work how I expect?
stage('foo') {
    throttle(['foo-label']) {
        bat '... acquire the resource...'
        bat '... use the resource...'
    }
}
post {
    always {
        bat '... cleanup the resource...'
    }
}
[JENKINS-45140] Add support of throttling of the entire build in Declarative Pipeline
Yeah, just do something like
pipeline {
    agent { label "whatever" }
    options {
        throttle(['some-category'])
    }
    ...
And it should work. =)
I'm going to try this immediately! Is this commonly supported by plugins that provide pipeline steps? Or did you have to specifically code support for this scope? I'm looking at the Throttle and Lock Resource plugins to try to solve the same problem and neither mention that you can use them in the options block.
Maybe we can get a documentation update?
Yeah, sounds like a doc update would be handy - any block-scoped step that doesn't require running on an agent will work in options, so, for example, throttle will work, timestamps will work, etc, but ws, which needs to already be on an agent, will not.
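To make that concrete, here is a minimal sketch (the label and category names are placeholders, and the category would need to exist in the global Throttle configuration); note that later comments in this thread report that the throttle option here does not actually limit builds:

```groovy
pipeline {
    agent { label 'whatever' }        // placeholder label
    options {
        timestamps()                  // block-scoped, agent-independent: works here
        throttle(['some-category'])   // likewise, assuming 'some-category' is defined
        // ws('/some/dir')            // needs to already be on an agent: will NOT work
    }
    stages {
        stage('Build') {
            steps { echo 'building' }
        }
    }
}
```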
Oh, that's interesting... so, the options block is run on the server?
Oh, you can see everything that you can put in an options block using the Pipeline Syntax tool and choosing "properties". That snippet generator creates a huge mess, for some reason, though, like it can't isolate just the options you've checked.
I have a GitLab web hook sending a build on PUSH and MERGE (open merge request) and I have an open merge request, so I'm always getting 2 builds for every push... anyway, that's how I'm testing. So, I setup this throttle category:
- Name: foo
- Maximum Total Concurrent Builds: 0
- Maximum Concurrent Builds Per Node: 1
I tried using throttle in the properties like this, but it didn't work (both builds were scheduled).
options {
throttle(['foo'])
}
Then, using the pipeline syntax generator and choosing the properties step and checking the boxes, I tried this step with no success (both builds ran).
options {
    [$class: 'ThrottleJobProperty',
        categories: ['foo'],
        limitOneJobWithMatchingParams: false,
        maxConcurrentPerNode: 0,
        maxConcurrentTotal: 0,
        paramsToUseForLimit: '',
        throttleEnabled: true,
        throttleOption: 'category']
}
Anyway, it's extremely confusing what is and is not supported. The README says you can use throttle to throttle the entire job from the job properties (that's what I want), but recommends against it.
I'm not savvy enough to tell if throttle meets these conditions, per INFRA-1053
https://gist.github.com/abayer/804cb7ab9a251c43481a720ab215c99e
Maybe I have something setup incorrectly and I need to test with completely separate job configurations both participating in the same throttle category. I thought it was a Jenkins default to throttle the same build to 1 instance at a time and I haven't changed anything, but it definitely ran the push and merge events simultaneously.
I'm seeing the same issue as anthonymastrean. What I expect to happen is a node to effectively be reserved for whichever job acquires it during throttle. What appears to be happening is jobs without throttle categories defined ignore jobs using throttle.
I'm trying to throttle declarative Jenkins jobs per category.
Using
options {
throttle(['foo'])
}
had no effect for me, neither for throttling the maximum builds per node nor the maximum total builds.
The only throttling that worked for declarative Jenkins jobs was limiting the total number of concurrent builds by configuring it for the job in the GUI.
Throttling the number of builds per node via the GUI also had no effect for me.
I'm using Jenkins 2.89 + Throttle Concurrent Builds Plug-in 2.0.1
Update: It took me some hours to figure this out; it would be great to have some examples for declarative pipelines, plus an explanation, in the plugin docs.
So throttling works for the parts of your declarative pipeline that are enclosed in node{} blocks. When parts are throttled like this, they still consume an executor while merely waiting to be allowed to run.
For example, if a node has a maximum of 10 executors and you throttle your job to use at most 2 executors, then run the job 10 times in parallel, all 10 executors will be consumed, with 8 of them just waiting to be allowed to run.
We need this as well.
The Pipeline Syntax Generator tool will spit out a blob of code like this:
properties([
    $class: 'ThrottleJobProperty',
    categories: ['my_category'],
    limitOneJobWithMatchingParams: false,
    maxConcurrentPerNode: 1,
    maxConcurrentTotal: 0,
    paramsToUseForLimit: '',
    throttleEnabled: true,
    throttleOption: 'category'
])
But then trying to run it in Jenkins throws this error:
WorkflowScript: 4: The properties section has been renamed as of version 0.8. Use options instead.
I don't know how to convert this to use 'options', or if it's even supported. Basically this setting appears to be incompatible with Pipelines after version 0.8.
jayspang I got this to work by placing the block outside of my pipeline like this:
// Do NOT place within the pipeline block
properties([
    parameters([
        choice(name: 'fooBar', choices: 'foo\nbar', description: 'Would you like foo or bar?', defaultValue: "foo"),
    ]),
    [
        $class: 'ThrottleJobProperty',
        categories: ['my_category'],
        limitOneJobWithMatchingParams: false,
        maxConcurrentPerNode: 1,
        maxConcurrentTotal: 0,
        paramsToUseForLimit: '',
        throttleEnabled: true,
        throttleOption: 'category'
    ],
])

pipeline {
    agent { label "example" }
    environment {
        EXAMPLE = credentials('example-creds')
    }
    stages {
        stage('Fun') {
        }
    }
    post {
        always {
            cleanWs()
        }
    }
    // If you place a properties block here you'll get the convert to options error
    options {
        timeout(time: 60, unit: 'MINUTES')
    }
}
kylejameswalker it looks like that worked. I guess the plugin doesn't fully support declarative pipelines yet?
I was able to put this snippet in the `options { }` block and it didn't throw an error:
options {
throttle(categories: ['my_category'])
}
Unfortunately, the `maxConcurrentPerNode` property is the one I actually need, and it threw an error when I added it. Perhaps this particular property doesn't work with declarative.
jayspang - the maxConcurrentPerNode property goes on the category you define in the Jenkins global config, and then you use that category in the Pipeline.
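As a sketch of that split (the category name and limits are placeholders): the limits are configured once on the category in the global configuration, and the Jenkinsfile only references the category by name:

```groovy
// Global config (Manage Jenkins -> Configure System -> Multi-Project Throttle Categories):
//   Category name:                      my_category
//   Maximum Total Concurrent Builds:    0   (unlimited)
//   Maximum Concurrent Builds Per Node: 1
//
// The Pipeline then names the category; it does not set the limits itself:
pipeline {
    agent any
    options {
        throttle(['my_category'])
    }
    stages {
        stage('Build') {
            steps { echo 'building' }
        }
    }
}
```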
I tried the step you suggested: added throttle settings in Manage Jenkins, added throttle(categories: ['test-category']) in options, and also added the code suggested by kylejameswalker. But the number of builds is not getting throttled.
Hello, I'm having the same problem:
options {
throttle(['shared-workspace'])
}
doesn't work for the entire build in a declarative job.
As a workaround, I'm using shared libraries:
1) define the entire declarative pipeline in a shared library var, as per https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
2) surround pipeline invocation with throttle, e.g.
Jenkinsfile:

throttle(['shared-workspace']) {
    myDeliveryPipeline {
        branch = 'master'
        scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
        email = 'team@example.com'
        serverPort = '8080'
        developmentServer = 'dev-myproject.mycompany.com'
        stagingServer = 'staging-myproject.mycompany.com'
        productionServer = 'production-myproject.mycompany.com'
    }
}
Hope this will be helpful to someone!
I had to revert back to a scripted pipeline in order to get throttling to work as intended. The issue seems to be that the throttle option in a Declarative pipeline occurs on a particular agent node, and it's either too late – the executor is already consumed – or it's simply a no-op, but in any case, I'm not seeing any actual evidence of throttling, either overall or per node.
pipeline {
    agent { label 'foo' }
    options {
        throttle(categories: ['MyCategory']) // <-- doesn't do anything, even with MyCategory defined in the system configuration
    }
    stages {
        // ...
    }
}
What I would have wanted is either to have the throttle option take effect before the agent is selected, or some sort of declarative syntax that would let me have agent none at the top level of the pipeline (where the throttle option is specified), then the ability to specify a particular agent for all the actual stages to be executed (Build, Test, Publish, etc.). I tried this:
pipeline {
    agent none // <-- to avoid having an agent allocated before the throttle category is evaluated
    options {
        throttle(categories: ['MyCategory'])
    }
    stages {
        stage('Overall') {
            agent {
                label 'foo' // <-- to get the real agent I want for the real stages
            }
            stages {
                stage('Build') {
                    // ...
                }
                stage('Test') {
                    // ...
                }
            }
        }
    }
}
But it simply hung with no useful output. If there's some other way to have agent none at the top level, then a single, reused agent for the actual stages within, I'd be interested to hear about it. I know I could have an agent per stage (Build, Test, etc.), but that wouldn't work for me – it has to be the same machine for all of these, and I'd rather not stash and unstash all the artifacts I need between these stages.
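For comparison, a Scripted Pipeline sketch of the pattern I reverted to (the category and label are placeholders): wrapping a single node block in throttle evaluates the category before the executor is claimed, and keeps every stage on the same machine with a shared workspace:

```groovy
// Scripted Pipeline: throttle is evaluated before node() claims an executor,
// and all stages below share one agent and one workspace (no stash/unstash).
throttle(['MyCategory']) {
    node('foo') {
        stage('Build')   { echo 'building' }
        stage('Test')    { echo 'testing' }
        stage('Publish') { echo 'publishing' }
    }
}
```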
Hi,
Any update with this issue?
The property "maxConcurrentPerNode" is not working
Thanks
I'm seeing what guybanay sees. I'm now testing a Declarative Pipeline workaround as kylejameswalker suggested – namely, putting the properties block outside the pipeline altogether – and it works, but it only honors the maxConcurrentTotal property, not the maxConcurrentPerNode one. Very simple repro (assuming a "ThrottleTest" category is configured globally):
properties([
    [
        $class: 'ThrottleJobProperty',
        categories: ['ThrottleTest'],
        throttleEnabled: true,
        throttleOption: 'category'
    ],
])

pipeline {
    agent any
    stages {
        stage('Long-running') {
            steps {
                input message: 'Shall we continue?'
                echo "Thanks! Continuing."
            }
        }
    }
}
If I configure a maximum of 5 concurrent builds across all nodes, but only 1 per node, the only limit enforced is the total one: a single build agent can still run as many builds of this job as it has executors. Only maxConcurrentTotal takes effect.
I tried metamilk's creative suggestion, but on more recent versions this is explicitly prohibited:
WorkflowScript: 2: pipeline block must be at the top-level, not within another block. @ line 2, column 5.
    pipeline {
    ^
Demo pipeline:
throttle(['myThrottle']) {
    pipeline {
        agent { label 'mylabel' }
        stages {
            stage('first') {
                steps {
                    sleep 60
                }
            }
        }
    }
}
I've tested many other variants, and I claim it is currently (with the latest versions) impossible to get node throttling on a declarative pipeline. If someone has a counterexample, I would appreciate it.
marcus_phi The secret sauce is having your pipeline defined in a shared library.
vars/myPipeline.groovy
def call() {
    pipeline {
        agent { label 'mylabel' }
        stages {
            stage('first') {
                steps {
                    sleep 60
                }
            }
        }
    }
}
jobs/my-pipeline/Jenkinsfile
throttle(['myThrottle']) {
myPipeline()
}
Je-s F-ng C-st!
It's insane that that would make a difference but it does.
So the 'pipeline block must be at the top-level' check is just on a file (textual) basis, not on the actual code structure.
Thanks metamilk! You made my day!
I honestly did not have much hope when I saw this issue having been updated in my e-mail this morning.
marcus_phi, I'll second your comment/excitement on finally having a solution to this mystery.
metamilk, thank you so much!
Our team's declarative and scripted pipelines already largely live in libraries. I made a couple of attempts to implement the throttle as described by metamilk. Here is a little more on how I think I'll `throttle` our builds: rename sharedPipeline to sharedPipelineInner, and have sharedPipeline wrap the throttle category around the sharedPipelineInner() call. This way, I don't need to update 100+ repositories * number of branches.
The current Jenkinsfile looks like this more or less...
// library imports
sharedPipeline()
vars/sharedPipeline.groovy
def call() {
throttle(['throttle-category']) {
sharedPipelineInner()
}
}
vars/sharedPipelineInner.groovy
def call() {
    pipeline {
        agent { label 'mylabel' }
        stages {
            stage('first') {
                steps {
                    sleep 60
                }
            }
        }
    }
}
I did try to simply shift the pipeline config into a def within the original file, but that did not work.
This defect asks for throttle within a step, which this solution does not provide, so I guess it will need to stay open. The docs for the plugin should be updated, for sure, with an example like this.
Edit - Then again, the title says entire build so maybe this solution does apply and the example in the body does not. anthonymastrean, can you comment?
What seems to work fine for me in a declarative pipeline is the following (syntax taken from here: https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/68)
pipeline {
    agent any
    options {
        throttleJobProperty(
            categories: ['test_3'],
            throttleEnabled: true,
            throttleOption: 'category',
        )
    }
    stages {
        stage('sleep') {
            steps {
                bat "sleep 60"
                echo "Done"
            }
        }
    }
}
Hope that helps.
Your example, chrop, works for maxConcurrentTotal, but not for maxConcurrentPerNode.
Got the same trouble here!
As a workaround, here is what we used:
Label for the master: main-server
Label for a specific slave: slave-node
And the corresponding configuration for the Multi-Project Throttle Categories name:
pipeline {
    agent { label 'main-server' }
    options {
        throttleJobProperty(
            categories: ['main-server'],
            throttleEnabled: true,
            throttleOption: 'category',
        )
    }
    stages {
        stage('sleep') {
            steps {
                throttle(['slave-node']) {
                    node('slave-node') {
                        sh "sleep 60"
                        echo "Done"
                    }
                }
            }
        }
    }
}
I can confirm that when running a (Scripted or Declarative) Pipeline job whose ThrottleJobProperty has throttleOption set to category and whose corresponding ThrottleCategory has a positive value of maxConcurrentPerNode, ThrottleCategory#maxConcurrentPerNode is not respected (as demonstrated in ThrottleJobPropertyPipelineTest#onePerNode). Note that ThrottleCategory#maxConcurrentPerNode is respected for Freestyle jobs (as demonstrated in ThrottleJobPropertyFreestyleTest#onePerNode) and for (Scripted) Pipeline jobs that use ThrottleStep rather than ThrottleJobProperty (as demonstrated in ThrottleStepTest#onePerNode).
Also note that unlike ThrottleCategory#maxConcurrentPerNode, ThrottleCategory#maxConcurrentTotal works in all of the above use cases (as demonstrated in ThrottleJobPropertyPipelineTest#twoTotal, ThrottleJobPropertyFreestyleTest#twoTotal, and ThrottleStepTest#twoTotal).
Implementing support for this use case would be challenging and would likely require significant knowledge of Pipeline internals (which I do not have). I suspect it would require either (a) making Pipeline: Declarative and ThrottleStep simpatico with each other (not easy) or (b) making the logic in ThrottleQueueTaskDispatcher#throttleCheckForCategoriesOnNode that calculates runCount support Pipeline jobs that use ThrottleJobProperty rather than ThrottleStep (also not easy).
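To spell out the one Pipeline form where per-node throttling is respected (per ThrottleStepTest#onePerNode above), a minimal Scripted sketch; 'MyCategory' is a placeholder that must be defined in the global configuration with maxConcurrentPerNode set:

```groovy
// Scripted Pipeline using ThrottleStep: maxConcurrentPerNode on the
// category IS honored here, unlike ThrottleJobProperty on a Pipeline job.
throttle(['MyCategory']) {
    node {
        sh 'sleep 60'
    }
}
```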
Thanks for that. That's very helpful.
ThrottleCategory#maxConcurrentTotal: does that take the nodes into account? That is, if 1 build is running, does it matter that another one is about to be scheduled, regardless of the settings on the nodes?
We're mostly trying to rely on maxConcurrentTotal, as we don't care about individual nodes; we just care that at most 1 build runs within this category. We're unable to get this working: all jobs in the queue seem to be assigned to executors as soon as the currently running job finishes.
Meanwhile, we're relying on provisioning a single node for the jobs that need to go in a single category (i.e. we see a node as a category).
So the only way to use #maxConcurrentPerNode currently in a declarative pipeline is to wrap the entire pipeline in a shared library and then use it as suggested above?
How, in this scenario, do you keep the pipeline versioned without keeping it in branches?
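On the versioning question: one common approach (a sketch; the library name and tag are placeholders for your setup) is to pin the shared library version per Jenkinsfile with the @Library annotation, so each repository or branch can track a tagged release of the pipeline library rather than a moving branch:

```groovy
// Jenkinsfile: pin the globally configured shared library to a tag.
// 'my-shared-lib' and '1.4.0' are placeholder names.
@Library('my-shared-lib@1.4.0') _
sharedPipeline()
```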
I'm using declarative pipelines and ran into this same problem today. Throttle by node works, but there's no good way to run post steps on the same node that's throttled, like for doing cleanup and uploading junit results.
I'm a bit of a Jenkins newbie and the whole shared library stuff is a bit daunting for me.
There seem to be two things at play preventing this:
1. The method to get the job's throttle settings only works on instances of hudson.model.Job, but for pipelines (at least Scripted pipelines) the task is actually an ExecutorStepExecution$PlaceholderTask. It is easy to add access to the throttle property by checking for this type and getting the property from task.getOwnerTask().
2. The logic that counts whether a running task is equal to a queued task needs to be expanded to include pipelines. The task passed to ThrottleQueueTaskDispatcher#buildsOnExecutor is a PlaceholderTask when checking the job's properties, and equality can be checked with
task.getOwnerTask().equals(currentExecutable.getParent().getOwnerTask())
but when checking based on category, the task passed is the actual WorkflowJob, and then this equality check works:
task.equals(currentExecutable.getParent().getOwnerTask())
But then pipelines get counted twice, because they're counted both on a flyweight executor and on regular executors, even before the job starts. I don't know what flyweight executors are or, more importantly, why they're being counted; they don't appear to come into play for freestyle jobs. On my local fork where I've been experimenting, I'm tempted to just skip counting flyweights for these two cases, and then the per-node limits seem to work as you'd expect them to. Obviously the flyweight counts are there for a reason, though. Those equalities above are also from trial and error, so they may not be comparing the right objects conceptually.
I could open a PR that does all this, but since I only use scripted pipelines, it's very likely what I'm doing is not type safe and will break for either other job types or other uses of this plugin.
gpaciga that sounds right at a high level. I only use scripted Pipelines myself as well. I don't know too much about flyweight tasks, but I know they are used in Matrix Projects as well as in the portions of Pipeline jobs that run outside of a node block (typically on the controller's built-in node). The current behavior for flyweight executors was added well before my time in jenkinsci/throttle-concurrent-builds-plugin#23 and appears to be a workaround for JENKINS-24748, which relates to the Build Flow plugin (which itself is no longer supported). It's hard for me to answer definitively whether that flyweight task logic is still needed for Pipeline, though maybe someone else on the jenkins-dev mailing list might have more context. The most important thing for a change of this nature is to get plenty of real-world testing done prior to release. There are 39 people watching this issue, so I'm sure we could convince some of them to install and test an incremental build if you filed a PR. Also note that the documentation regarding what is and isn't supported in this plugin is woefully out-of-date and would need to be updated as well.
Then I think there is a case where a pipeline job occupies a flyweight but not a regular executor, and so won't be counted towards the limit.
This is what I have so far: https://github.com/gpaciga/throttle-concurrent-builds-plugin/tree/jenkins-45140-throttle-pipeline-per-node
All the changes are confined to ThrottleQueueTaskDispatcher.java. Tests break, but I can't easily identify why (I don't have any experience with writing plugin tests).
A promising start. I think you're starting to see just how hard fixing this issue really is. The fundamental problem is, as you wrote, differentiating between cases where a Pipeline job is executing logic outside of a node block (i.e., on a flyweight executor) vs when it is executing logic inside of a node block (i.e., on a regular executor). In Freestyle jobs, this problem does not exist. Note that the throttle step solves this problem in ThrottleJobProperty.getThrottledPipelineRunsForCategory by doing accounting to keep track of whether the Pipeline is or isn't in a (throttled) node block. It's not entirely clear to me how such accounting could be implemented outside of a custom step, which is why I wrote earlier that going down this route "would likely require significant knowledge of Pipeline internals".
Feel free to open a PR if you want to keep discussing the technical details of this issue without spamming everyone who is watching this Jira ticket.
Likely the Throttle Job property will work, or it won't. If it doesn't, this is a feature request, not a defect: Declarative Pipeline is a simplified syntax by design, and such use cases need to be added there by the Declarative plugin maintainers. CC abayer