Type: Bug
Resolution: Unresolved
Priority: Blocker
Labels: None
There is a partial fix for this for Declarative pipelines in pipeline-model-definition-plugin v1.4.0 and later, significantly improved in v1.8.4. Due to the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
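As a sketch of the two mechanisms (the script console setting lasts only until the next restart; the startup flag persists; the exact startup command varies per installation):

```groovy
// In the Jenkins script console ("Manage Jenkins" -> "Script Console"),
// this sets the static field directly and lasts until the next restart:
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true

// The equivalent JVM option on the command line (survives restarts):
//   java -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -jar jenkins.war
```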
As noted, this still works best with a Jenkinsfile that has the pipeline directive as the only root item in the file.
Since v1.8.2 this workaround reports an informative error for pipelines that use `def` variables before the pipeline directive. Add a @Field annotation to those declarations.
This workaround generally does NOT work if the pipeline directive is inside a shared library method. If this is a scenario you want, please come join the pipeline authoring SIG and we can discuss.
Please give it a try and provide feedback.
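To make the @Field workaround mentioned above concrete, here is a minimal illustrative Jenkinsfile (the variable name and stage are made up for this sketch; the annotation comes from groovy.transform):

```groovy
import groovy.transform.Field

// Script-level variables declared before the pipeline directive must be
// annotated with @Field for script splitting to accept them.
@Field
String buildLabel = 'example'

pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                echo "buildLabel: ${buildLabel}"
            }
        }
    }
}
```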
Hi,
We are getting the below error in a Pipeline that has some 495 lines of Groovy code. Initially we assumed that one of our methods had an issue, but once we remove any 30-40 lines of pipeline Groovy, the issue goes away.
Can you please suggest a quick workaround? It's a blocker for us.
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during class generation: Method code too large!
java.lang.RuntimeException: Method code too large!
at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:815)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1073)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Finished: FAILURE
is duplicated by
- JENKINS-50033 Method code too large using declarative pipelines (Closed)
- JENKINS-72290 Encountering method too large error (Closed)
is related to
- JENKINS-61389 Pipeline Matrix return a "Method code too large!" on a really short pipeline (Closed)
- JENKINS-56500 Declarative pipeline restricted in code size (Reopened)
- JENKINS-64846 Pipeline with Matrix doesn't see variables outside pipeline block (Resolved)
[JENKINS-37984] org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large! error in pipeline Script
bitwiseman This really does fix it.
The only way I can easily test this right now is with the jenkins/jenkinsfile-runner Docker image. However, just by swapping the current versions of the plugins with the ones in your PR and setting SCRIPT_SPLITTING_TRANSFORMATION, I can tell that the Jenkinsfile pipeline that was too large was able to work.
henryborchers
Excellent!
I'm hoping for feedback from more folks such as brianjmurrell before I release this.
I don't have a pipeline exhibiting this problem any more, since my last occurrence and the refactoring[1] I did to resolve it. That may not last long though, as new stages are always being added. I can't say how soon that will be.
Ultimately, does this further enhancement of SCRIPT_SPLITTING_TRANSFORMATION still result in a wall where the Jenkinsfile can be once again too big, or does this new mechanism just split as much as is necessary to accommodate any size Jenkinsfile?
Could the change here make things any worse? If not, going forward with it is a wash at worst then, yes?
[1] This time it was moving multi-condition when clauses into functions to simplify the when blocks, causing more unnecessary indirection, IMHO. Reading my Jenkinsfile is now an exercise in jumping all around the file (to see the value of functions used solely to reduce the pipeline block size, not to implement any DRY) and back and forth between repos (pipeline libraries), etc., which is very annoying.
brianjmurrell
There will always be a wall. The limitations in class size are hard coded into the Java Class file format.
However, this improvement moves the wall exponentially further out, similar to going from a 16-bit integer to a 32-bit integer. It is a massive improvement.
Even if you are not encountering the issue currently, it would be helpful if you tried this new version to make sure it doesn't break anything. Further, you could try reverting the last change you made to your Jenkinsfile to mitigate this and see if it still works. The only change you might need to make is adding "@Field" to script-local variable declarations (`def varName = "value"` in the root of the script).
I don't have anything to revert. I'd never commit a Jenkinsfile that doesn't run in Jenkins. I wouldn't have the approvals to land such a patch.
So the last time I ran into this was when I added a stage or two, but in the same commit I also refactored to allow the new stage(s) to fit.
I'm also not sure when my priorities at my day job will allow me time to stand up a non-production Jenkins server to try this out in. When I do find the time, I will be sure to update here.
I thought I'd add that I tested these changes with my skeleton script that reproduced the error for us and it seems to be working. I also can't make these changes to our main Jenkins instance, but I used my docker setup that I have for reproducing errors.
Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).
I installed the plugins and activated SCRIPT_SPLITTING_TRANSFORMATION, and now I've been able to run the same script with 60 stages without hitting the error. I might be able to go higher, but our use case is far from hitting that many stages.
I do want to say thanks for keeping this issue active. We've been running a workaround script for a while now but I've been keeping my eye on progress on this issue, and it looks promising so far. I'm anxious to get back to a pure declarative implementation.
Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).
My suspicion here is that the complexity of the when conditions adds to the amount of bytecode generated, contributing to the Method code too large situation. I moved all of my multi-condition tests into functions so that each of my when conditions is a single call to the function wrapping the actual multi-condition test.
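That refactoring can be sketched like this (the function and condition names are hypothetical, not from the actual Jenkinsfile):

```groovy
// Hypothetical helper: the multi-condition test lives in one function,
// so the bytecode generated for the when block shrinks to a single call.
def shouldDeploy() {
    return env.BRANCH_NAME == 'main' &&
           params.DEPLOY == true &&
           currentBuild.currentResult == 'SUCCESS'
}

// In the pipeline, the when clause then becomes a one-line expression:
// stage('Deploy') {
//     when { expression { return shouldDeploy() } }
//     ...
// }
```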
I'm anxious to get back to a pure declarative implementation.
Indeed. Without unnecessary indirection through functions that have no DRY purpose whatsoever and exist solely to reduce the size of the Method code.
I have a couple of questions about workarounds:
- I saw that many recommend using shared libraries. How is it different from using functions from the same file but outside of the pipeline{} section?
- Some also suggested to me that separating functions in the Jenkinsfile works only if you wrap the pipeline{} section in a call() function like this: call(){pipeline{...}}. Is it true?
- Is it me, or does using matrix{} greatly raise the risk of getting such an error? It seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?
- Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?
- Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning and introducing in projects?
I'm asking about these because I really hesitate to use the shared library solution. Most of my functions are not universal and don't make sense for any other projects. I also use multibranch jobs a lot and can't imagine how static libraries can work with dynamic branches when the build process is strictly correlated with the development process (the Jenkinsfile changes with code development) and thus can't be separated. A change in code would have to be reflected in the shared library too. For example, when developers add a new compilation target, a new matrix axis is added to the Jenkinsfile, and sometimes a new section. How would this work in a multibranch environment with the shared library solution, where some branches work with the new Jenkinsfile and some still have to be built the old way?
1. I saw that many recommend using sharing libraries. How it is different from using functions from the same file but outside of pipelines{} section?
The underlying code is completely different. For example, functions in the same file are internally part of the class for that script, whereas shared library functions are in their own classes.
2. Some also suggested to me that separating functions in the Jenkinsfile works only if you wrap the pipeline{} section in a call() function like this: call(){pipeline{...}}. Is it true?
I have no idea what syntax you are referring to. Do you mean putting the pipeline in a shared library?
3. Is it me, or does using matrix{} greatly raise the risk of getting such an error? It seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?
No, matrix doesn't cause this, it only makes it easier to run into. If you created the same pipeline manually as what is generated using matrix, you'd get the same issue. But you would also have a much longer and more repetitive Jenkinsfile.
4. Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?
Those things do not cause this problem, but their presence can make it harder for the declarative engine to mitigate this problem.
5. Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning and introducing in projects?
This is false. Scripted pipeline syntax can also encounter this issue, but it is less common because there isn't an extra layer like there is in Declarative. However, when scripted pipelines do encounter this problem, it is purely up to the writers of that script to work around it. In Declarative, I have been able to process the pipeline code to transparently work around the issue in many cases (with SCRIPT_SPLITTING_TRANSFORMATION).
Greetings,
Getting the error from the sheer number of "when" blocks in the pipeline.
Test pipeline with 35 booleanParam and 35 stages with `when { expression { return params.Foo } }`.
I tested Jenkins 2.235.5 and plugins in version 1.7.1.
I installed
https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-api/1.7.3-rc1872.9504c794d213/
https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-definition/1.7.3-rc1872.9504c794d213/
https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-extensions/1.7.3-rc1872.9504c794d213/
https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-stage-tags-metadata/1.7.3-rc1872.9504c794d213
then run
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
and getting the new error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during semantic analysis: SCRIPT_SPLITTING_TRANSFORMATION is incompatible with local variable declarations. Add the the '@Field' annotation to local variable declarations: org.codehaus.groovy.ast.expr.DeclarationExpression@26fafdbf[org.codehaus.groovy.ast.expr.VariableExpression@49600128[variable: failedStages]("=" at 1:1: "=" )org.codehaus.groovy.ast.expr.ListExpression@5b30fe0e[]].
We use declarative pipelines and our main CI pipeline is close to 800 lines with 30 parallel stages, all with when clauses.
Since we use Kubernetes, each stage spins up its own pod, and we have a shared Jenkins library to simplify the pod definitions as well as running the individual steps.
The SCRIPT_SPLITTING_TRANSFORMATION flag did nothing noticeable.
As a workaround we tried to group our parallel stages to share the when statements, but nesting parallel stages inside parallel stages is not allowed.
Short of creating a sub-pipeline per parallel group, I'm not really seeing a way out of this problem. This is annoying since it will probably add a couple of minutes to our pipelines and we'll have to track and copy test result files between pipelines.
This seems to be a very big design flaw of declarative pipelines, where the JVM limitations impact the ability to use a DSL.
In the absolute short term we will stop creating more parallel stages, which will slow down the productivity of our engineering organization.
Considering this bug is several years old and seems to impact a lot of organizations, it would be good if the documentation could describe this problem and warn about the limits of declarative pipelines.
Please upgrade to at least v1.8.3 or greater and try the feature flag in the description before commenting on this issue.
jenkinsneveragain
Did you try what the error suggested? It is pretty specific.
sodul
I'm surprised script splitting had no effect.
Your pipeline still in your Jenkinsfile, right?
And pipeline is the only thing declared in your Jenkinsfile?
Could you try this again with the latest release?
bitwiseman I missed the version requirement. We have:
- pipeline-build-step:2.13
- pipeline-github-lib:1.0
- pipeline-graph-analysis:1.10
- pipeline-input-step:2.12
- pipeline-milestone-step:1.3.1
- pipeline-model-api:1.7.2
- pipeline-model-definition:1.7.2
- pipeline-model-extensions:1.7.2
- pipeline-rest-api:2.18
- pipeline-stage-step:2.5
- pipeline-stage-tags-metadata:1.7.2
- pipeline-stage-view:2.18
- pipeline-utility-steps:2.6.1
We are on Jenkins 2.263.3 LTS and we are encountering another issue that prevents any job from starting when we update some random plugins. So I'm a little worried about upgrading until JENKINS-64727 is addressed.
I will try to upgrade over the weekend to minimize potential outages for our internal developers, since the bug is random and not reliably reproducible on other instances.
As far as the pipeline is concerned we start it with this:
Map pr_focus = [:]
String prepare_uuid = UUID.randomUUID().toString().take(8)
pipeline {
    agent none
    stages {
        stage ('Prepare') {
            agent {
                kubernetes {
                    label "prepare-ci-${prepare_uuid}"
This uuid is for the Kubernetes plugin later on, since our agent definitions need to have a guaranteed unique id. We can probably get a uuid from our library though.
The Map is a list of stage groups that we enable/disable based on what files have changed. This allows us to skip stages for our PRs if the tests would not be relevant based on the diff. For example, if only Python code has changed we don't need to run Golang unit tests.
steps {
    script {
        prepare()
        sh "jenkins/pr_changes.sh"
        container('python') {
            sh "jenkins/pr_focus.py > pr_focus.txt"
        }
        pr_focus = readProperties(file: 'pr_focus.txt')
        echo "pr_focus: ${pr_focus}"
    }
}
Then later:
stage('Go vet') {
    when {
        not { equals expected: '1', actual: pr_focus.SKIP_GO_STAGES }
        beforeAgent true
    }
I think we had to declare the map at the top level to ensure the values would be available to all stages, but if you have a recommendation for another approach we are open to trying it.
Ah, I see. The reason script splitting didn't work is that it silently disabled itself when it saw any other expressions in the Jenkinsfile outside of pipeline.
The new version v1.8.2 allows other expressions, but not bare variable declarations, and will throw an informative error rather than silently attempting to continue with script splitting disabled. In v1.8.2 with script splitting enabled, variable declarations such as Map pr_focus = [:] and String prepare_uuid = UUID.randomUUID().toString().take(8) need to have the @Field annotation added to them.
So, your Jenkinsfile would look like:
@Field
Map pr_focus = [:]
@Field
String prepare_uuid = UUID.randomUUID().toString().take(8)
pipeline { ... }
bitwiseman, sorry for bothering you.
I currently have version 1.8.2; does that mean the SCRIPT_SPLITTING_TRANSFORMATION flag is enabled by default?
experimental feature that could be activated by setting SCRIPT_SPLITTING_TRANSFORMATION=true
So I suspect it should be disabled by default?
Currently I'm able to use variables declared outside of the `pipeline` block in all stages,
except the ones that are in the `matrix` definition (for those I used `@Field`), which is weird. Is this expected behavior?
Any recommendation for defining global variables (strings, maps) in Declarative pipelines (in case some variable should be used by several stages)?
Paweł Did you try what the error suggested? It is pretty specific.
No, I was not sure, and I was testing it in the evening on production, so I moved to another workaround quickly.
https://code-held.com/2020/01/22/jenkins-local-shared-library/
I tested it locally, and then when implementing it on prod I noticed a method displaying the Jenkins build status ("build abc is OK").
I've removed it and replaced it with Jenkins built-in things, and the testing team is not complaining to me about the missing "status OK" method so far.
def failedStages = []   <-- I removed it
pipeline {
    agent none
failedStages.add(env.FAILURE_STAGE)
#removed
stage('Results') {
    steps {
        script {
            if (failedStages.isEmpty()) {
                echo("${env.JOB_NAME} - OK")
            } else {
                echo(abc.getMessage(failedStages))
            }
        }
    }
}
mattermostNotify(currentBuild.result, abc.getMessage(failedStages), 'ABC')
replaced by
mattermostNotify("${currentBuild.currentResult}", "Build failed at stage: ${env.FAILURE_STAGE}\nReason: ${env.FAILURE_REASON}", 'ABC')
moskovych
Yes, it is disabled by default.
jenkinsneveragain
I'm not sure I understand what you're doing there, but it seems unrelated to this issue.
The error said: "Add the '@Field' annotation to local variable declarations" . Is there some other way this could be said that would be more clear?
bitwiseman, ok, so, can you explain this please:
I'm able to use variables declared outside of the `pipeline` block in all stages,
except the ones that are in the `matrix` definition (for those I used `@Field`).
Does matrix have different logic?
And again: any recommendation for defining global variables (strings, maps) in Declarative pipelines (in case some variable should be used by several stages)? Documentation?
bitwiseman After adding @Field we got:
00:00:04.555 WorkflowScript: 42: unable to resolve class Field , unable to find class for annotation
With the following plugins:
- pipeline-build-step:2.13
- pipeline-github-lib:1.0
- pipeline-graph-analysis:1.10
- pipeline-input-step:2.12
- pipeline-milestone-step:1.3.2
- pipeline-model-api:1.8.3
- pipeline-model-definition:1.8.3
- pipeline-model-extensions:1.8.3
- pipeline-rest-api:2.19
- pipeline-stage-step:2.5
- pipeline-stage-tags-metadata:1.8.3
- pipeline-stage-view:2.19
- workflow-aggregator:2.6
- workflow-api:2.40
- workflow-basic-steps:2.22
- workflow-cps:2.87
- workflow-cps-global-lib:2.17
- workflow-durable-task-step:2.36
- workflow-job:2.40
- workflow-multibranch:2.22
- workflow-scm-step:2.11
- workflow-step-api:2.23
- workflow-support:3.7
Am I missing something? Do you have a full example of a declarative pipeline that uses the `@Field` annotation?
sodul, in my case I needed to add one `import` at the top of the file to be able to use it:
import groovy.transform.Field
and then define this annotation:
@Field Map dockerParameters = [...]
Thanks moskovych, it worked perfectly!
bitwiseman, to answer your question about how to handle the error message better: I recommend you put an explicitly spelled-out example of a pipeline with the @Field annotation and the required import in the documentation, as many of us are not Groovy experts. The error message should contain a short link to the documentation so we can clearly see how to implement the workaround.
bitwiseman, we ran into a bit of an issue, which was a facepalm for me in hindsight. Adding the @Field annotation worked well, but now the other branches (we have hundreds of branches) that do not have the new annotation are failing.
I was thinking that the new flag could behave in a backward-compatible mode. Instead of failing outright when the @Field annotation is missing, you could log a warning and fall back to the existing behavior. This way all Jenkinsfiles that were not previously failing will keep on working.
moskovych
You'll need to provide an example.
sodul
Thanks for the feedback. In the final version, I'll definitely do that.
You can set "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true". Your fixed pipeline that uses "@Field" will still use the newer/better script splitting, and the other pipelines will start working again.
FYI, I know this is annoying, but it had to be done this way. People were complaining that "script splitting isn't working" without taking the time to read that it doesn't work with locally declared variables. This way anyone not using locally declared variables (which are not recommended anyway) gets the best possible behavior, and anyone who is using them gets clear feedback about their choices. That feedback needs improvement, but it is better than silently not doing what the user asked for by setting this flag.
bitwiseman
Some of our pipelines, on another Jenkins instance, call other pipelines. Since we need to pass along parameters, we have variables such as this before the `pipeline {}` section.
@Field List parameters = [
    gitParameter(name: 'BRANCH', value: params.BRANCH),
    booleanParam(name: 'SKIP', defaultValue: false)
]
We then have several stages that get the parameters passed around.
when { expression { params.SKIP == false } }
steps {
    build job: 'other', propagate: true, wait: true, parameters: parameters
}
Unfortunately we get an exception thrown apparently on params:
groovy.lang.MissingPropertyException: No such property: params for class: groovy.lang.Binding
We tried using `env` but that does not seem to be available either.
This is not something we can easily move to our shared library, since the list of parameters is specific to each of these pipelines.
bitwiseman, ok, here is a small example of my pipeline:
#!/usr/bin/env groovy
//library("jenkins_shared_library@1.0.0")

//@groovy.transform.Field
String resourcePrefix = new Date().getTime().toString()

//@groovy.transform.Field
Map dockerParameters = [
    registry: "docker.example.com",
    registryType: "internal",
    images: [
        image1: [image: "image1", dockerfile: "Dockerfile1"],
        image2: [image: "image2", dockerfile: "Dockerfile2"]
    ]
]

pipeline {
    agent any
    options { skipDefaultCheckout true }
    parameters {
        booleanParam defaultValue: true, description: 'Build & Push image1', name: 'image1'
        booleanParam defaultValue: true, description: 'Build & Push image2', name: 'image2'
    }
    stages {
        stage("Prepare") {
            options { skipDefaultCheckout true }
            failFast true
            parallel {
                stage('Test1') {
                    steps {
                        // All variables available in simple stages and parallel blocks
                        echo "resourcePrefix: ${resourcePrefix}"
                        echo "dockerParameters: ${dockerParameters}"
                    }
                }
                stage('Test2') {
                    steps {
                        echo "resourcePrefix: ${resourcePrefix}"
                        echo "dockerParameters: ${dockerParameters}"
                    }
                }
            }
        }
        stage("Docker") {
            options { skipDefaultCheckout true }
            matrix {
                axes {
                    axis {
                        name 'COMPONENT'
                        // Note: these values are the same as described in dockerParameters and params
                        values 'image1', 'image2'
                    }
                }
                stages {
                    stage("Build") {
                        when {
                            beforeAgent true
                            expression { params[COMPONENT] == true }
                        }
                        // agent { kubernetes(k8sAgent(name: 'dind')) }
                        steps {
                            // Failing on resourcePrefix/dockerParameters, as it doesn't have the Field annotation
                            // Question is: why are variables not available inside matrix?
                            echo "resourcePrefix: ${resourcePrefix}"
                            echo "dockerParameters: ${dockerParameters}"
                            // Here is one step as example:
                            //dockerBuild(
                            //    image: dockerParameters.images[COMPONENT].image,
                            //    dockerfile: dockerParameters.images[COMPONENT].dockerfile
                            //)
                        }
                    }
                }
            }
        }
    }
}
The result is the following:
stage `Prepare` goes fine anyway - as expected.
stage `Docker` fails (on each matrix stage) with the message:
groovy.lang.MissingPropertyException: No such property: resourcePrefix for class: groovy.lang.Binding
Until I add the annotation `@groovy.transform.Field`.
The same with `dockerParameters`, where I have a map of different values which are similar and share some common values.
Note: this is just an example; there are parameters which we use in different stages, and copy-pasting all of them into each stage is not an appropriate solution - defining them as common/global outside of the `pipeline` block is the only way to do it, isn't it?
Additional info: plugin version 1.8.2 / Jenkins version 2.235.3 / No splitting params (described in PR #405) or experimental features were ever enabled.
Any ideas?
We found a partial workaround for our pipelines that need to pass around parameters. We used to define a variable, but with `params` and `env` not available, switching to a `get_params()` method so that these values are available by the time they are needed seems to do the trick.
Restart from stage is also working as expected.
def get_params() {
    return [
        gitParameter(name: 'BRANCH', value: params.BRANCH),
        string(name: 'FOO', value: env.FOO),
        booleanParam(name: 'SKIP', value: params.SKIP)
    ]
}
pipeline {
    ...
    build(job: 'other/pipeline', propagate: true, wait: true, parameters: get_params())
    ...
}
Some of our pipelines include a more complex get_build_params():
def get_build_params(name) {
    return [job: name, propagate: true, wait: true, parameters: get_params()]
}
So the build call can be as simple as build(get_build_params()), which greatly simplifies our Jenkinsfiles and reduces copy-pasting, especially for some of our test automation pipelines that orchestrate calling many sub-pipelines. Since the various parameters are pipeline-specific, we do not really want to put them in the library, as it would make it much larger than necessary; furthermore the parameters can be branch-specific, which makes using a shared library less ideal.
Initially we had `@Field my_params = [...]`, but that was failing since `env` and `params` are not available at that point. We tried moving the variable definition to the first stage under a script block, but that would break `restart from stage` since values are not persisted. This alternative approach recreates the same data over and over, but that's pretty lightweight and seems to be fully backward/forward compatible.
bitwiseman, I've created a new bug, since this ticket's description doesn't match my case:
https://issues.jenkins.io/browse/JENKINS-64846
The workaround with the @Field annotation still forces users to fix their pipelines, which means this is a breaking change.
After upgrading my staging environment from 2.277.3 to 2.277.4 along with all of my plugins, I get the error again. On the production environment the same pipeline works. The pipeline-model-definition plugin is v1.8.4 on both instances. The JVM property is configured in JENKINS_JAVA_OPTIONS in the file /etc/sysconfig/jenkins on both instances. If I look at System Information I can see other entries from JENKINS_JAVA_OPTIONS, like java.awt.headless, in both environments, but org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION only in my production environment.
If I run
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
in the script console, the job runs until the next restart (via Jenkins itself, "systemctl restart jenkins.service", or rebooting the server); after that it fails again.
So at the moment I cannot upgrade my production environment anymore.
For reference we have upgraded to 2.277.4 a couple of weeks ago and everything works normally for us.
We do have this set on the command line of the server:
-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
tkleiber With the monitoring plugin, are you able to see the JVM arguments and confirm that you do have that CLI option passed properly?
I don't need the monitoring plugin, as I can normally see the entry in "Manage Jenkins" -> "System Properties", and I do see it in production. If I set this on staging via "Manage Jenkins" -> "Script Console", I cannot see it in "System Properties" and it works only until the next Jenkins restart.
I saw the value "true" for the entry "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION" in "System Properties" before my upgrade of the staging environment, and I see it on my production environment, which is not upgraded.
It seems to me that you start your Jenkins via the command line; that is not the case here.
We start Jenkins as a service via "systemctl start jenkins.service" on staging (OS SLES 12) and "service jenkins start" (OS SLES 11) on production. So setting "-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true" in "JENKINS_JAVA_OPTIONS" in the file "/etc/sysconfig/jenkins" seems the only option for our use case; this has worked before on staging and works in production. Are there any other options to set this when starting Jenkins as a service?
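On systemd-based systems, one thing worth checking (a sketch, assuming your Jenkins package ships a systemd unit; the exact environment variable honored depends on the package's unit file) is a drop-in override created with `systemctl edit jenkins`:

```ini
# Hypothetical drop-in: /etc/systemd/system/jenkins.service.d/override.conf
[Service]
Environment="JENKINS_JAVA_OPTIONS=-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"
```

After editing, `systemctl daemon-reload` and a service restart are needed for the override to take effect.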
bitwiseman just a heads up, the issue number referenced within the built-in Jenkins error message related to this issue has a typo:
It should be this issue, JENKINS-37984 , instead of JENKINS-34987:
General error during semantic analysis: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names].
java.lang.IllegalStateException: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names].
The workaround here, on Jenkins LTS 2.289.1 with the latest plugins, only works when activated via the script console, not via JENKINS_JAVA_OPTIONS in /etc/sysconfig/jenkins. So it again works only until Jenkins is restarted.
tkleiber We have not upgraded to LTS 2.289.1 yet so cannot confirm, but it seems your /etc/sysconfig/jenkins is not being applied when your Jenkins instance is launched. You need to check that the java process has the -D option passed to its command line. You can check that with the monitoring plugin
Or, if you have shell access to the server, run ps auxwww
Yes - you are right!
Because the staging server was also upgraded from SLES 11 to 12, the service definition changed from service to systemctl.
According to Installing Jenkins as a Unix daemon - Jenkins - Jenkins Wiki, the production server uses the "Java Service Wrapper" configuration, which uses /etc/sysconfig/jenkins.
The staging server now uses the "OpenSuse" "Linux service - systemd" configuration from that page, which no longer uses /etc/sysconfig/jenkins.
I have now added the JENKINS_JAVA_OPTIONS value from /etc/sysconfig/jenkins directly to the ExecStart parameter in /usr/lib/systemd/system/jenkins.service, and everything works again!
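For anyone else hitting this: rather than editing the unit file under /usr/lib directly (package updates can overwrite it), a systemd drop-in achieves the same. A sketch, assuming a unit that passes JAVA_OPTS to the java process; the variable name and unit layout may differ in your packaging:

```ini
# /etc/systemd/system/jenkins.service.d/override.conf
# Create via: systemctl edit jenkins.service
# Then apply with: systemctl daemon-reload && systemctl restart jenkins.service
[Service]
Environment="JAVA_OPTS=-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"
```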
Thanks!
tkleiber
I'm glad you were able to figure out the problem.
tkleiber sodul moskovych jmcclain
How is the feature behaving for you? Do you have any feedback, comments, observations? I'm trying to evaluate its readiness for wider use.
How is the feature behaving for you? Do you have any feedback, comments, observations?
bitwiseman For reference, initially one of my larger pipelines stopped working, so I tried the
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
workaround, however it just resulted in a different message about needing to set
SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true
in order to use variables defined outside of my pipeline. Even then I still needed to add "import groovy.transform.Field" and "@Field" declarations to my variables, and the "env." prefix seemed to stop being recognized by Jenkins for defining environment variables within my pipeline, etc.
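To make the @Field change concrete, this is the shape of the edit (variable names here are made up; only the declarations before the pipeline block change):

```groovy
import groovy.transform.Field

// Before: plain script-local declarations such as `def deployTarget = 'staging'`
// at the top of the Jenkinsfile. With SCRIPT_SPLITTING_TRANSFORMATION enabled,
// they must become fields of the script class:
@Field String deployTarget = 'staging'
@Field Map toolVersions = [jdk: '11', maven: '3.8']

// pipeline { ... }   // the Declarative pipeline block itself is unchanged
```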
Eventually I just moved some of my pipeline stages to a downstream helper job to get the overall pipeline working again, which I'm guessing is the recommended approach anyway, rather than manually changing the experimental SCRIPT_SPLITTING_TRANSFORMATION and SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES settings to true.
I'd say it definitely seems to be a bit of a breaking change, but if you think the optimization is worth it then I don't really mind. I feel like the error message could be a bit more intuitive though, maybe something like:
"Your declarative pipeline code is [x]kb which exceeds Java's maximum bytecode size of 64kb and therefore can't be parsed by Jenkins. Consider moving some stages to downstream pipelines or splitting your pipeline into multiple smaller pipelines to reduce your code size to satisfy Java's 64kb limit. Alternatively, set org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true as a workaround. See JENKINS-37984 for more details."
> bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?
Our main declarative multibranch pipeline only works with the SCRIPT_SPLITTING_TRANSFORMATION feature; without it we would have to go back to the classic up-/downstream approach. We don't use variables outside of the pipeline at the moment. All other pipelines are small enough.
We use trunk-based development in a monorepo for our main loan application, with different backend and frontend technologies, and not all of them are implemented yet.
Although we try to move a lot of logic into pipeline libraries, many stages remain because of when conditions that depend on the branching model and repository names (e.g. for testing Jenkins staging). Furthermore we need different pipeline stages for environments like development, test and production, and for different controllers building on different operating systems.
One thing we miss at the moment is better parallel support, as other systems like UC4 have, e.g. parallel within parallel and the corresponding visualization in Blue Ocean.
> bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?
We are not using the SCRIPT_SPLITTING_TRANSFORMATION setting (by default it is false, right?).
Our pipelines mostly use methods/functions from a Jenkins shared library, and
all pipelines contain some global variables before the pipeline block (variables with some Groovy logic that are used in more than two stages, or that should be defined as global).
You can take an example pipeline from the description of this issue: JENKINS-64846
Pipelines are separated from functions, so there are no pipeline blocks in the shared library's call functions, like it was shown here: JENKINS-64846?focusedCommentId=407258
bitwiseman, I know this is beta, but is there any documentation available describing the flags and the resulting pipeline behavior? It would be good to have examples without diving into the plugin source code, especially for our approach of using Groovy outside the pipeline block.
As I want to test a specific library branch, I tried to use the following notation:
@Library('shared-libraries@feature/test-shared-library') _
pipeline {
    // long pipeline here
}
Therefore I tried to use the following properties combined:
-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true
But as soon as I add the second parameter, the first one no longer works. Is this the intended behavior? So I cannot use local libraries in big pipelines? Or do I have to do this another way?
Jenkins 2.387.1 on SLES 12.5.
> How is this still an issue in 2024?
Because the issue is the result of a fundamental design decision.
> Because the issue is the result of a fundamental design decision.
Of using Jenkins in the first place. So glad I moved away from it. And I can finally get work done instead of fighting made up issues all the time. Never been happier.
For the better part of a decade, I have been using declarative pipelines. I have found them very expressive as well as very easy to read and maintain. They are more powerful than GitHub Actions YAML files. However, from the latest comments it sounds as though everyone else has abandoned the declarative pipeline. Am I wrong about this?
I have used scripted sections within my declarative pipelines for things that I can't easily do within the constraints of the declarative style. However, the idea of a purely scripted pipeline seems potentially messy. If you've abandoned the declarative pipeline, what have you moved on to instead?
We continue to use declarative pipelines successfully with the workaround configured.
> If you've abandoned the declarative pipeline, what have you moved on to instead?
Never used declarative pipeline to begin with. Jenkins is desperately trying to be a "platform". It is not a "platform". It is a cron server with a web interface. It can be turned into a "platform" by "platform" engineers, in which case the pipeline would be generated automatically - it doesn't have to be readable or declarative, as that would only add artificial limitations and made-up issues. Which it does. Jenkins can never be a "platform", as everyone's last-mile challenges are going to be very unique. Jenkins fails to understand that, but it is very successful at alienating the "platform" people who at one point were its biggest advocates and helped organizations adopt it. No more.
As Jenkins tries to be a "platform", it also tries to be smart about the confused-deputy problem. Which, again, is not its problem to solve - it only gets in the way of those of us who actually need to solve it.
There might be some number of users for whom Jenkins does solve last-mile issues out of the box, and who do not require "platform" engineers. Good for them - but I would argue they are not doing anything complex to begin with, and would probably be better off with something much simpler and less maintenance-heavy, like maybe GitHub Actions ARC. The fact that in 2024 Jenkins still cannot even restart without downtime, not to mention horizontal scaling, not to mention in-memory state serialized directly into XML on disk, not to mention crazy IOPS utilization, is a joke. They missed the Kubernetes memo. Any relatively large Jenkins deployment becomes a maintenance nightmare. I have been running GHA ARC for CI and ArgoCD for CD for over a year now - I maybe spent 30 minutes tops on its maintenance the entire year, and my users had zero service interruptions. State is distributed, everything scales horizontally... I have so much free time now, to write this message, for example.
Others seem to have the same problem: https://confluence.atlassian.com/jirakb/groovy-script-cannot-be-executed-due-to-method-code-too-large-error-1063568679.html
bitwiseman
I used jenkins/jenkinsfile-runner as the base docker image, added the hpi files from your links to /usr/share/jenkins/ref/plugins/, and installed the rest of the required plugins using jenkins-plugin-manager. I ran docker with -e JAVA_OPTS="-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"
To be fair, I didn't actually get my Jenkinsfile pipeline running. I'm still learning how to use the jenkinsfile-runner, but instead of the "Code too large" errors, I got errors that I didn't have any agents with the correct labels. At least I know that Jenkins was able to load my Jenkinsfile without crapping out. I still need to configure my dockerfile agent with other docker agents.