Type: Bug
Resolution: Unresolved
Priority: Blocker
Labels: None
There is a partial fix for this for Declarative Pipelines in pipeline-model-definition-plugin v1.4.0 and later, significantly improved in v1.8.4. Because of the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
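For reference, a sketch of both ways to set it (the property name is quoted from above; the startup command line around it is illustrative and depends on how your controller is launched):

```groovy
// Jenkins script console (Manage Jenkins -> Script Console).
// Takes effect immediately, but only until the next restart:
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
```

To make it permanent, pass the same property as a JVM argument at startup, e.g. java -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -jar jenkins.war.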
As noted, this still works best with a Jenkinsfile that has the pipeline directive as the only root item in the file.
Since v1.8.2 this workaround reports an informative error for pipelines using `def` variables before the pipeline directive. Add a @Field annotation to those declarations.
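A minimal sketch of that change (the variable name is illustrative):

```groovy
import groovy.transform.Field

// was: def deployEnv = 'staging'  (a script-local variable before the pipeline directive)
@Field def deployEnv = 'staging'

pipeline {
    // ...
}
```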
This workaround generally does NOT work if the pipeline directive is inside a shared library method. If this is a scenario you need, please come join the Pipeline Authoring SIG and we can discuss it.
Please give it a try and provide feedback.
Hi,
We are getting the below error in a Pipeline that has some 495 lines of Groovy code. Initially we assumed that one of our methods had an issue, but once we remove any 30-40 lines of Pipeline Groovy, the issue no longer occurs.
Can you please suggest a quick workaround? This is a blocker for us.
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during class generation: Method code too large!
java.lang.RuntimeException: Method code too large!
at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:815)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1073)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Finished: FAILURE
- is duplicated by
  - JENKINS-50033 Method code too large using declarative pipelines (Closed)
  - JENKINS-72290 Encountering method too large error (Closed)
- is related to
  - JENKINS-61389 Pipeline Matrix return a "Method code too large!" on a really short pipeline (Closed)
  - JENKINS-56500 Declarative pipeline restricted in code size (Reopened)
  - JENKINS-64846 Pipeline with Matrix doesn't see variables outside pipeline block (Resolved)
- links to
[JENKINS-37984] org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large! error in pipeline Script
Surprised that breaking it up into separate methods did not help. Other than that, I have no suggestions offhand.
Yeah it's kind of puzzling – as part of my testing I had reduced the number of stages significantly.
So a version that worked had 4 stages with no methods and it was about 1720 lines.
With the full set of stages (27), the number of lines is about 5300. After breaking it up into methods, no method was larger than about 450 lines, but it wouldn't run.
I also went in and removed the 4 stages that had worked, which put the entire script down to ~3600 lines broken out into methods, and that wouldn't run either.
Whoops, well, cancel what I said. When I generated the code with what I intended to be methods, I was actually defining closures attached to a variable. When I did it properly as methods, it worked.
So more specifically, for those of you who find this, what I used to have was:

stage('foo') {
    parallel([ /* ... giant list of maps ... */ ])
}

What I changed it to was:

def build_foo() {
    parallel([ /* ... giant list of maps ... */ ])
}

stage('foo') {
    build_foo()
}
I made one. I still think this case should be touched on in the Pipeline docs; I don't think searching Jira or StackOverflow is a substitute for documentation.
The problem seems to be linked to the overall size of the script, but I noticed it is also affected by the number and size of stages.
A workaround is to declare a function and call it from the big stages.
Before:

stage ('build') {
    ....
    if (.....) {
    } else {
    }
    .....
}

Workaround:

def build() {
    ....
    if (.....) {
    } else {
    }
    .....
}

stage ('build') {
    build()
}
So this is happening because of how CPS transformation works - everything's getting wrapped in a single method behind the scenes, and that's ending up being too large. The best answer we've got is to move things out into shared libraries.
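A rough sketch of that workaround; the library name `my-shared-lib` and the step name `deployApp` are hypothetical:

```groovy
// Shared library: vars/deployApp.groovy
// Code here compiles as its own class, so it does not count
// against the single method generated from the Jenkinsfile body.
def call(String target) {
    echo "deploying to ${target}"
    // ... the long logic formerly inlined in the Jenkinsfile ...
}
```

The Jenkinsfile then shrinks to a thin entry point:

```groovy
@Library('my-shared-lib') _
node {
    deployApp('staging')
}
```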
Any workarounds? My scenario is quite specific: we treat the Jenkinsfile as a sort of JIT output. No one works with it directly; it is generated automatically from something else. It is essential for debugging purposes to have a big-picture view of what the resulting Jenkinsfile will look like, and there is no benefit in splitting it or moving it to a shared library (again, this is not for humans and not made by humans).
We need a way to increase this limit.
I use a script block as a workaround. For me it was a lot of work to change, but the result was smaller code, which is always good:

stages {
    stage('parallel stages') {
        steps {
            script {
                // generate and run the parallel stages here
            }
        }
    }
}

In my case I have 200-300 parallel stages.
I generate the parallel stages with a single function and execute more than 200-300 stages in parallel. This is very dynamic, as the number of stages depends on user input. So I now use a script block to generate and execute all the stages instead of plain declarative Pipeline syntax.
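A sketch of that pattern, assuming the stage count comes from a build parameter (the parameter and stage names are hypothetical):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'SHARD_COUNT', defaultValue: '200', description: 'number of parallel stages')
    }
    stages {
        stage('parallel stages') {
            steps {
                script {
                    def branches = [:]
                    int n = params.SHARD_COUNT.toInteger()
                    for (int i = 0; i < n; i++) {
                        int idx = i   // capture the loop variable for the closure
                        branches["shard-${idx}"] = {
                            echo "running shard ${idx}"
                        }
                    }
                    parallel branches
                }
            }
        }
    }
}
```

Because the branches are built in a loop rather than written out stage by stage, the compiled method stays small no matter how many stages run.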
The thing is, I actually don't want to hide this code in the library. My library already does its thing: I have a custom DSL syntax that my library processes to generate the resulting Jenkinsfile. And I follow TDD, so I have tests, and my tests basically check the resulting Jenkinsfile against my own DSL input. Hiding parts of the Jenkinsfile in the library would ruin the whole idea of testing for my project. Again, the Jenkinsfile for me is a kind of JIT output; it is not code to be read or created by a human.
I am running into this, example file:
https://pastebin.com/raw/fnKnYiMq
If I rip out some env declaration blocks like:

script {
    env.CI_IMAGE = env.DEV_DOCKERHUB_IMAGE
    env.CI_TAGS = env.EXT_RELEASE + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
    env.CI_META_TAG = env.EXT_RELEASE + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
}

it will compile and run. I already tried to move every script section out to functions, but I still get Method code too large.
This is only ~800 lines, and even if you compiled this text 10x over it would not be at the 64KB Java method limit.
I am trying to understand how the workflow is interpreted such that this limit gets hit.
+1
It is a real headache with about 800-900 lines of code.
Could you please prioritize fixing this issue?
Hello Guys,
I am trying to find a workaround, but after some testing I am not able to determine how the Jenkinsfile is parsed in the case of a Scripted Pipeline. It seems to me that the whole file goes to the JVM at once, which gives the limitation of 800-900 lines of code mentioned above. I tried to artificially split the code into different node blocks that use the same runners, like:
node('master') { /* stage, stage, stage */ }
node('master') { /* stage, stage, stage */ }
node('master') { /* stage, stage, stage */ }
but it makes no difference. Is there any way to modify the code structure so it would be compiled in chunks? Am I able to load other Jenkinsfiles dynamically? Or maybe it is possible to move some code to shared Groovy libraries (but then I would need to call the HTTP Request plugin from the libraries, which I don't know is possible)?
I also did an additional, maybe even stupid, test to check how many instructions are too many to compile, and I was a bit shocked to see an ArrayIndexOutOfBoundsException when I had something above 400 print statements divided into 3 stages. Are the instructions held on a stack and then sent to the JVM? How come a declarative pipeline is so easy to split while the scripted version is not?
I would very much appreciate any workaround so I can move on with development.
Best Regards,
Szymon
It may sound obvious, but it worked for me: I extracted some reusable code into methods in the Jenkinsfile, and those are compiled in such a way that I can put more code into them. I don't know where my fixation came from, but I thought methods were only available in declarative pipeline, whereas in scripted pipeline code had to be moved to, for instance, shared libraries. Now my code structure looks like this:
node('master') {
    stage('setup')
    stage('cleanup') {
        // some logic
        method2()
        method3()
    }
}

def method1() {}
def method2() {}
def method3() {}
I'm starting to run into this issue myself when my pipeline reaches about 800 lines. I've been creating helper functions outside of the "pipeline" brackets and that's helping, but I still find myself running into this issue more than I'd like.
I can't help it. Having all the new nice features makes me write longer more useful pipelines.
800 lines of code in one file sounds bad in any language. It definitely needs a refactoring and shared libraries.
Just to be clear: in my case above I experienced that issue while already having my libraries, and it was due to the way I designed these libraries (they treat the Jenkinsfile concept as a sort of JIT: they do some calculations based on the input and spit out a long resulting Jenkinsfile that I then eval()ed). I solved my case by splitting what I eval() into chunks, exchanging the data through a context singleton object in my library (surprisingly, singleton instances were not per Jenkins master JVM but per library instance, i.e. per individual build). So technically my case wasn't related to Jenkins at all. I was sending too long a string into the eval() method, and the JVM was legitimately giving me the finger. To give an example, my chunks would look like:
getContext('stages')['Test Stage'] = { echo 'hi' }

getContext('stages')['Second Test Stage'] = { echo 'hi again' }

timestamps {
    ansiColor("xterm") {
        stage('Test Stage', getContext('stages')['Test Stage'])
        stage('Second Test Stage', getContext('stages')['Second Test Stage'])
    }
}
Given that I think this issue may be closed now.
llibicpep, my pipeline is long because it contains more than just unit testing. It's a complete DevOps pipeline with optional stages depending on the situation. I have sequential stages, parallel stages, and most of these have a post section that cleans up or depends on the success or failure of the stage. It's very declarative and pretty easy to reason about, so there really shouldn't be a reason to refactor the code.
Jenkins has become a very powerful tool with the Pipeline DSL, with a lot of very useful features. It's a shame when I can't use a feature because my pipeline already contains too many lines.
henryborchers, the complexity of your pipeline is completely irrelevant. Just to draw a parallel: the fact that you're creating a complex enterprise product does not justify putting all its code in a single file, or even in a single method, does it?
Parts of your pipeline have to be reusable functions in a shared library, so your actual Jenkinsfile should consist of only simple statements, something like:
doThis()
if (itsTrue()) {
doThat()
}
That applies only to scripted pipelines, of course. According to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines it sounds like declarative pipelines are even more limited than I thought, so I'm happy I don't use them.
A quick Google search reveals an open source project doing something good enough to demonstrate what I mean:
https://github.com/fabric8io/fabric8-pipeline-library
https://github.com/fabric8io/fabric8-jenkinsfile-library
henryborchers, the complexity of your pipeline is completely irrelevant.

Couldn't agree more.

Just to draw a parallel: the fact that you're creating a complex enterprise product does not justify putting all its code in a single file, or even in a single method, does it?

It's not a complex enterprise product. Quite the opposite. I don't have much support, so I have to automate as much DevOps work as I can myself. Because I have very few resources and stakeholders that demand a lot, I'm making the most of the resources I can get my hands on.

Parts of your pipeline have to be reusable functions in a shared library, so your actual Jenkinsfile should consist of only simple statements, something like:

doThis()
if (itsTrue()) {
    doThat()
}

That applies only to scripted pipelines, of course. According to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines it sounds like declarative pipelines are even more limited than I thought, so I'm happy I don't use them.
I'm happy that you're happy that you don't use declarative pipelines. However, I do. The only limit I've run into has been line length.
A quick Google search reveals an open source project doing something good enough to demonstrate what I mean:
https://github.com/fabric8io/fabric8-pipeline-library
https://github.com/fabric8io/fabric8-jenkinsfile-library
Yes, I already use shared libraries for some things. I just don't need them very often.
The only reason my pipelines are so long is because of the declarative style.
stage("Run Doctest Tests") {
    when {
        equals expected: true, actual: params.TEST_DOCTEST
    }
    steps {
        bat "pipenv run sphinx-build -b doctest docs\\source build\\docs -d build\\docs\\doctrees -v"
    }
    post {
        always {
            dir(reports) {
                archiveArtifacts artifacts: "doctest.txt"
            }
        }
    }
}
It's more verbose, but I find it highly readable and very easy to maintain. Shared libraries are nice for helpers, but they can be a pain to maintain, so I keep them simple.
I don't have much support, so I have to automate as much DevOps work as I can myself. Because I have very few resources and stakeholders that demand a lot, I'm making the most of the resources I can get my hands on.
That is a somewhat twisted conclusion. A shared library, as a layer of abstraction, helps maintain and simplify your life, as long as you use it right. The same goes for the possibility of running unit tests for that code. Otherwise it's just a snowflake; it may look beautiful from a certain angle, but that is the first symptom you're getting right there.
However, this is a pointless discussion. I am just a user, the same as you; I am not making that call. But even then, I don't see what Jenkins can possibly do, as this is a JVM limitation. The Jenkinsfile is essentially a Groovy DSL file which ends up being executed as a method. I can imagine they could do some semantic analysis and try to automatically split it into chunks or extract parts of it into separate functions, but the level of effort that would need to be put in is nowhere close to the potential benefit. So I wouldn't expect any resolution of this issue.
I have a specific example where I have 174 individual git repos that I want to sync and run Simian and other static code analysis on, to check for warnings/errors/duplicate code, etc.
I have tried breaking the checkout down into multiple stages. How do I fix this? I have tried moving it out to a function, but that does not work. I have tried breaking it down into multiple (12+) stages, which also does not work.
My total script is 224 lines long (64K). Not sure this should be too big.
Getting the same error while using a much smaller pipeline (actually it is delivered as a shared library); the whole thing (*.groovy), with comments and long property names, is just slightly over 64KB.
I guess when it is translated into one great method it is obfuscated / optimised.
jglick, is there a way to monitor the size of the transformed pipeline? Some output or a custom job to monitor it would be great.
I do not foresee anyone spending much effort on this issue, as that would distract from the goal of moving Pipeline execution out of process, rendering this moot.
Could you share some details on "moving Pipeline execution out of process"? Is this on a roadmap somewhere?
I don't need rendering; it would be enough to output the pipeline size during the build so I know how far I am from hitting this. Something like:
println Jenkins.instance.getPipelineSize('pipeline.groovy').toString()
We just hit this issue today on our pipeline with many stages, mainly used to run many integration test suites.
quas It is on the roadmap and has a couple engineers working hard on it (myself included), although we're not quite ready to demo or announce something (a few more pieces need to fall into place).
svanoort any updates on this issue? I see nobody assigned to it right now, but I've just hit the same issue on our CI.
avsej as discussed above, I doubt anyone is planning to spend time on this.
jglick, but last comment a month ago was about working hard on it. So I thought there might be some progress, or at least assignee for the issue.
No, the last comment was about working on a completely different execution engine that would not suffer from this class of bugs by design.
Any update or resolution on this issue?
Our production deployment is stuck due to this pipeline size restriction.
Pipeline scripts fail if they contain more than 78 build job statements in total. Right now we are splitting the complete script into smaller pipeline scripts and running them all in a predefined sequence, given that a single parallel currently cannot contain more than 78 builds.
But this is production, and as the server list grows this count will go beyond 100 or more at some point, and then this pipeline job concept won't be of any use.
Therefore, requesting you to expedite this and provide us an update.
Any news on this? Like https://issues.jenkins-ci.org/secure/ViewProfile.jspa?name=naval_gupta01, we are also suffering from this issue in our Jenkins system.
We are on Jenkins 2.150.2 and Groovy Plugin 2.60. We started getting this error when we upgraded Groovy to 2.6.1. Our pipeline is about 690 lines. It is a workflow to deploy 2 application components to staging servers and then to prod servers. Please let us know when we can get an update on this.
I'm running into this more and more. I've put what makes sense into shared libraries and squeezed more and more into ugly "helper functions" outside of the pipeline, but it's getting really hard.
I really hope a better solution is coming soon, because I'm pretty sure my coworkers are working with HR to have a talk with me about how much swearing I've been doing.
Even when putting everything into shared libraries, my pipeline code is still 700+ lines.
I had to split my code up into 2 Jenkins jobs because of this issue.
svanoort, it's been a few months since you teased us about something on your roadmap that would alleviate this issue. Any chance you could provide a little more info, or at least tease us enough to whet our appetites? I'm running up against the limit way too often these days.
I am not aware of any plans to work on this issue. I tend to doubt it is fixable in the current Pipeline execution engine (workflow-cps), beyond better reporting the error. The known workaround is to split long blocks of code into distinct methods.
jglick, a few posts ago you mentioned something about creating a different execution engine that wouldn't have this issue. Is there anything you can point me to so that I can follow the progress, or at least anything interesting you can tease to keep me hopeful that the future looks bright?
I have been putting any "steps" block that is more than 1 line into helper functions, but I'm still running into issues.
I'm sorry if I come off as nagging. I just really love Jenkins. The declarative pipeline has been one of my favorite tools, which I use for everything I build.
henryborchers I believe that work is now inactive. I am afraid I have no particular advice for now beyond:
- If at all possible, remove logic from Pipeline Groovy code and move it into external processes. We see a lot of people with very complex calculations that could better have been done in some Python script or whatever in the workspace, so that the Pipeline part boils down to just node {sh './something'}.
- When not possible (for example because the Pipeline script is actually required to determine how to configure steps such as parallel or Jenkins publishers), split up long functions (including the implicit main function at top level of a Jenkinsfile) into shorter functions.
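The second point can be sketched like this in a scripted Jenkinsfile (function names are illustrative):

```groovy
// Each named function compiles as its own method with its own
// 64 KiB bytecode budget; only the short top-level body remains
// in the implicit main method.
def checkoutAll() {
    // checkout steps formerly at top level
}

def buildAndTest() {
    // build and test steps formerly at top level
}

node {
    stage('Checkout') { checkoutAll() }
    stage('Build and Test') { buildAndTest() }
}
```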
Any `Jenkinsfile` of any complexity can be shortened to just one line that looks like `doStuff()`. Does it make sense to do it that way? Probably not, but hopefully this gives an idea of where to move with this issue.
Let me re-phrase and sum up some of the questions in this thread:
"I'm writing my application, and even though I moved a bunch of the stuff into separate methods, I still keep all of my high-level app flow in my `main()` method, and it is still too big and Java complains about that. Can you fix it plz?"
I think the ticket may now be closed.
Why are you talking about moving stuff into functions and mentioning flows? This problem also reproduces in declarative pipelines, which do not have code at all. They just describe steps, and each one is a single line that invokes a built-in function.
Well, if a declarative pipeline is so big that it won't fit into the limit, clearly the definition of a step needs some re-thinking. Sounds like a layer of abstraction is required to wrap multiple commonly-reusable steps into one to reduce the count.
Are you saying that all the steps I'm using in my pipeline are also counted and inlined into the resulting class? Stuff like zip(...), archiveArtifacts, etc.?
The entire Jenkinsfile effectively becomes the body of some sort of `eval` function under the hood (for the sake of simplification, let's forget about CPS and such). It makes no difference if you split the code into methods but still keep the methods in the Jenkinsfile. Methods need to be moved into a shared library or otherwise made available in the Jenkinsfile scope.
No matter whether it's a declarative or scripted pipeline, it's just directives that are effectively Groovy closures. Standard Java rules still apply no matter what.
This problem also reproduces in declarative pipelines
If true then it may be feasible to provide a fix in the pipeline-model-definition plugin, even without a general fix for Scripted.
We get this error in a purely declarative pipeline just by the sheer amount of
stage {
    when { ... }
    agent { ... }
    steps { ... }
    post {
        success { ... }
        failure { ... }
        cleanup { ... }
    }
}
Add parallels, rinse, repeat — Method code too large.
Our steps {} are already a call to a single function.
It would be great if we could produce the stages in a separate function or file, but so far we can't find anything on how to go about it.
I'm seeing the same thing as 'efo plo' in a declarative pipeline: only defining the flow in the pipeline with stages, and calling shared libraries and functions to execute code.
I now have 41 stages inside my pipeline{}. Adding just one more stage gives me this error.
It would be helpful if someone observing this issue in Declarative could create a new issue in the pipeline-model-definition-plugin component (linked to this one), attaching a minimal, self-contained Jenkinsfile reproducing the error in a specified version of Jenkins and of the workflow-cps (Pipeline: Groovy) and pipeline-model-definition (Pipeline: Declarative) plugins. I am not making any promises, but that would at least improve the odds of a targeted fix for that case. (Bonus points for a pull request to jenkinsci/pipeline-model-definition-plugin adding an @Ignore-annotated test case demonstrating the error.)
You can only move your code at the script level into classes, but you can't do this at the stage(s) level, which drastically restricts any refactoring.
I have a parallel job running on 6 different platforms with the "same" stages, but I have to copy-paste stages into a monster pipeline, which ends up as an 1130-line call method.
This is really a deal breaker in pure declarative.
You can only move your code at the script level into classes, but you can't do this at the stage(s) level, which drastically restricts any refactoring.

That's not entirely true. What I'm going to say might be too advanced for many users, but pipelines hitting this limit sound pretty advanced to me. A shared library can provide your own custom declarative syntax (or imperative, if you prefer). The shared library, being a layer of abstraction, can then calculate the resulting Jenkinsfile (either declarative or scripted) based on your input and send it to `eval()`.

llibicpep, I am afraid I do not get your point.
The scope is pure declarative; hence I doubt you can wrap any code except inside script { } outside of your main pipeline.
If you can do so, then please post a snippet of code, thank you.
Added a sample Jenkinsfile reproducing the problem.
It is also available at https://pastebin.com/eDVppFjm
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during class generation: Method code too large!
java.lang.RuntimeException: Method code too large!
at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
Any and all ideas with regards to how this may be refactored are more than welcome.
The very basic example is all there: https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines. You don't define parts of the pipeline, but the entire pipeline, inside the library. The example is pretty primitive; my idea is that the library can implement a number of custom closures to make the following syntax available:

myPipeline("platform") {
    foo "bar"
    deploy "baz"
}

In other words, the input Jenkinsfile becomes nothing more than a metadata file explaining what's inside the repository. The shared library in turn can do calculations based on the input above and do a lot of templating work to produce the real Jenkinsfile, to send it to `eval()` or execute inline.
Another example https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
In my understanding, that would do nothing except move the error to the stage of compiling the shared library method defining this pipeline. But if you could attach a pastebin that would compile that pipeline, I'd be more than happy to be proven wrong. Please note that while I simply replicated the same stage over and over, in our practice every stage calls different functions (all of which are already defined in a shared library).
Well, if you do something like this but the resulting Jenkinsfile produced is still in declarative style, you probably don't solve anything, as its size will still be huge. But the point is: you just came up with your own declarative style that is simpler, smaller, and better suits your project or organization, being designed specifically for it. So you probably don't need declarative style in the produced result any more, as it becomes nothing more than an intermediate format between the shared library and Jenkins (a sort of JIT in the Java world, really just a language in which a machine talks to a machine). There's no point in keeping all those declarative-style limits. In turn, if the produced result is in scripted format, it can be split into chunks and eval()ed separately, for instance per stage.
I guess my point is: the declarative style just sucks. From my experience it works only for small and simple projects, despite how hard the opposite is advertised.
As it is, I am not looking to come up with my own declarative style or a custom-fit design specific to my organization. I am just trying to do my job, fighting this "Method code too large!" error.
Well, then you may be trying to apply the wrong tool for the job. Try some commercial offerings, maybe.
Well, then you may be trying to apply the wrong tool for the job. Try some commercial offerings, maybe.

I understand you're frustrated, but comments like this are not very helpful.
I guess my point is: the declarative style just sucks. From my experience it works only for small and simple projects, despite how hard the opposite is advertised.

I couldn't disagree more. It's super easy to read and understand. It has a clean syntax. It's well thought out. It's expandable with scripts if need be. It's so easy to look through Blue Ocean to see why something has failed. It's freaking brilliant!!!
The declarative pipeline is the main reason that I use Jenkins and why I advocate for it wherever I go. Did I mention it's brilliant? The biggest issue is that it is so easy to use that I just want to use it more and more until I get smacked in the face with "Method code too large!". It's a little bit like the use of electricity over the last century: most electronic devices have become more efficient over the decades and use less electricity as their designs have improved, yet we keep finding new uses for electricity at a faster rate than we can make devices more efficient. We end up needing more and more power plants as a result, not fewer.
I feel the same way about Jenkins, and especially the declarative pipeline. I've been squeezing my steps into fewer and fewer lines. I make shared libraries for my common code and "helper script functions" outside of the pipeline block. However, I keep finding more and more ways to catch problems in my code early with careful placement of sequential and parallel stages and "when" and "post" blocks. I end up with lots of stages with one or two lines in the steps block. Then all of a sudden I run into "Method code too large!"... and then I become a sad developer...
I understand you're frustrated, but comments like this are not very helpful.

Well, sometimes the truth is not helpful at all. What's your point? Maybe you have a solution, or even a PR? Enlighten us.
I couldn't disagree more.
You didn't seem to read my suggestion carefully enough. The declarative approach, indeed, is the future. The particular Jenkinsfile declarative implementation sucks, though. Because you kind of use declarative style, but you still have to imperatively define steps. What a bummer!
If you read what I said carefully enough, you'll see that what I opted for is rather to have a small metadata file in the repository that states facts about the app inside, so that the library can deal with it and render a pipeline for it. This rendered body is much easier to work with in scripted style, and the fact that it's scripted doesn't matter because users don't directly deal with it.
hello,
Is there any solution or workaround for this?
This has been open for the last 3 months!! Please provide some solution to this......
I have a strange scenario with this error. I have jobs which are huge in size: the first job is about 1200 lines long and works completely fine, while the second job is about 850 lines and got this error. Removing some lines from the code made it work fine.
There is a workaround, of sorts. There is a fixed limit on the size of the Groovy. No way around that limit, but you can do things to minimize what's in Groovy.
Eliminate clutter. For us, that was stuff like echo statements and unused variables. You might also be able to alter the Jenkins job's configuration rather than setting options in the Groovy.
A better answer is to shift large blocks to scripts. For example, if you have a bunch of "sh" or "bat" commands in a row, put them in a script file, then invoke the script from Groovy.
Good luck. This limit should still be fixed (or raised). You just cannot get to enterprise-worthy pipelines with it.
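To sketch the "shift large blocks to scripts" suggestion above (the file names here are illustrative, not from the original comment):

```groovy
// Before: every inline sh line adds to the compiled size of the Jenkinsfile.
//   sh 'make configure'
//   sh 'make build'
//   sh 'make package'

// After: the commands live in a shell script checked into the repository
// (e.g. ci/build.sh), so the Jenkinsfile contributes only one short call
// to the compiled bytecode.
stage('Build') {
    steps {
        sh './ci/build.sh'
    }
}
```

The same applies to scripted pipelines: one `sh './ci/build.sh'` in a `node` block instead of a run of inline commands.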
Hello,
this is causing trouble at my company too... I think a solution could be moving bash commands to an external file or using Groovy libraries, but during the initial development phase of a pipeline I usually put all the code in a single file: it is not maintainable, but it is what I need to quickly develop and test new pipelines.
I'd be really grateful to the Jenkins developers if they can solve this issue.
BR,
Alessio
Hello,
I'm just fiddling with Groovy shared libraries; they helped me reduce the code size a bit, but now I'm pretty stuck since I cannot reduce it any more, so each line I add triggers the issue... Maybe I'm not using shared libraries in the correct way? Does anyone have some hints on this?
BR,
Alessio
Workaround:
Move your code into different scripts inside an extra Jenkinsfile repository (or in your build repository), check those files out, load them into variables and call the code as functions.
Example
Jenkinsfile (Main executed)
node() {
    checkout Jenkinsfile-repo
    def helperScript = load("path/to/helperscript.groovy")
    helperScript.DoYourWork()
}
helperscript.groovy
#!groovy

def DoYourWork() {
    // Do something that doesn't work because it's too much to load in the initial script
}

// Important statement for loading the script!!!
return this
Since the script is not loaded initially, the main Jenkinsfile can be compiled.
Hope that works for more people than just me.
mueck If one can't define a pipeline inside the `DoYourWork()` function — which I suspect is not the case, though I haven't tried it — this does not solve the original issue.
I am not sure what you mean by defining a pipeline. If it means loading your properties, then you can still do that in the main Jenkinsfile. Or is yours so huge that this alone triggers the `java.lang.RuntimeException: Method code too large!` exception?
Just to clarify this issue.
When Jenkins compiles the code to start the build, it fails because the code is too large. Code that is loaded at runtime is not compiled at that moment, which means you can make the compiled code smaller by moving parts of it out into a file that you load while building. The compiled code can still have a decent size, though - everything you need to "define" your pipeline.
I know that this works as I encountered the Method code too large exception and am working around it with my "solution"
mueck Please see the attachment, if you can work it out I will be more than happy.
Carsten's approach seems to suggest that the behavior of all node() elements can be offloaded to subordinate Groovy scripts. Rather than having the developer do it manually, wouldn't it be nice if that's what Jenkins did for you? In other words, Jenkins parses the developer-provided Jenkinsfile, and manufactures the script-with-load and the loaded scripts that it then works off of. This would then be a true solution.
This PR will address this issue for declarative scripts that do not use "def" variables before the "pipeline {}" block.
https://github.com/jenkinsci/pipeline-model-definition-plugin/pull/355
The PR above does basically what you describe - but only for declarative pipelines that do not use `def`s. Read on for more info.
(and anyone else interested):
When Jenkins compiles the code to start the build, it fails because the code is too large. Code that is loaded at runtime is not compiled at that moment, which means you can make the compiled code smaller by moving parts of it out into a file that you load while building. The compiled code can still have a decent size, though - everything you need to "define" your pipeline.
The underlying issue is that the Java class file specification limits methods to 64k of bytecode. Relatedly, it also limits a single class to 64k constant pool items (not size, but number of items).
NOTE: as far as I can see, this is not a question of writing the output to a file - the limitation is on the binary structure of the byte stream. Regardless of whether you load a class from a file on disk or from a byte stream in memory, if you try to create a Java class that violates these limits it will fail.
By default, the entire Jenkinsfile is run as part of script initialization - one method. If you break your pipeline up into multiple methods things get better (for a while); however, each new method you create must also not violate the method size limit. Further, eventually your pipeline will encounter another limit - constants per class. You can work past this by further dividing your pipeline into classes.
The problem with dividing into classes is that `def` variables added to the root of the script are not accessible from those other classes. I have not found a solution to that issue, which is why the above PR doesn't address the issue for Declarative pipelines that use `def` variables. It is possible that we could detect which parts of the pipeline refer to `def` variables and keep those in the same class, but it is involved and likely to be error prone.
If we focus on Declarative only, we could add some way to initialize variables in a directive instead of using `def`s; then Declarative would be free to split functions and classes as needed, while still preserving the behavior people need from `def`s.
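As a hedged sketch of the manual splitting described above, applied to a scripted pipeline (stage contents are placeholders): each stage is moved into its own method so no single method's bytecode approaches the 64k limit, and `@Field` makes a script-level variable a field of the script class so those methods can still see it.

```groovy
import groovy.transform.Field

// A plain `def` here would be a local variable of the implicit run()
// method and invisible to the methods below; @Field promotes it to a
// field of the generated script class.
@Field String buildVersion = '1.0.0'

// Each stage compiles into its own (small) method instead of all of
// them accumulating in one giant run() body.
def buildStage() {
    stage('Build') {
        echo "Building ${buildVersion}"
        // ...many more steps...
    }
}

def testStage() {
    stage('Test') {
        echo 'Testing'
        // ...many more steps...
    }
}

node {
    buildStage()
    testStage()
}
```

This only postpones the problem (the constant pool limit per class still applies eventually), which is the motivation for splitting into classes as described above.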
If you are using Declarative:
There is a partial fix for this in v1.4.0. Due to the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
As noted, this still works to some extent for pipelines using `def` variables, but not as well.
Please give it a try and provide feedback.
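For reference, and assuming the plugin exposes the flag as a public static field (as the comment's "either on the command-line or in Jenkins script console" suggests), a sketch of both ways to enable it:

```groovy
// On the Jenkins startup command line:
//   java -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -jar jenkins.war

// Or in the Jenkins script console of a running instance
// (affects builds started afterwards; lost on restart):
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
```

Note that setting `JAVA_OPTS` as an environment variable is not enough on every installation; the `-D` flag must actually reach the Jenkins JVM's arguments.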
I wonder how many people are here because of lack of proper matrix jobs in Pipeline.
This problem has to be resolved one way or another that works for all currently accepted forms of Jenkinsfile. As pointed out in a previous comment there is a limit to how much you can refactor your Jenkinsfile before Declarative Pipelines are just plain unusable.
What happened to the solution hinted at over a year ago that was then:
not quite ready to demo or announce something (a few more pieces need to fall into place)
Are they still waiting to fall into place, over a year later?
Unfortunately, that was hopeful thinking on Sam's part and he is no longer working on this. Solving this completely is a huge undertaking. If it were easy, it would be done already.
As to matrix, have you taken a look at the 1.5.0-beta1?
bitwiseman But huge undertaking or not, this is a severely limiting (show-stopping in fact) factor. At some point somebody will have done all of the refactoring into a library that is possible and still hit this problem. What is the solution/recommendation for that person?
As for 1.5.0-beta1, no, I have not. 1.5.0-beta1 of what exactly? Is there a high-level changelog somewhere highlighting what's going to be new/fixed in it?
This issue is three years old and has never really been addressed with anything other than vague marketing-speak and nothing definitively helpful for people who listened to the screams from Jenkins to migrate to pipelines, only to discover all the limitations. But you can happily pay Cloudbees for enterprise support and more expensive add-ons which still won't solve your problems.
Give it up, find another solution.
Got hit with this limitation as well. Nuts. What is the workaround here?
bitwiseman In your comment you say to set a JVM argument.
I've added it to my Jenkins instance, and from the Jenkins script console I see that JAVA_OPTS variable is being populated like this:
JAVA_OPTS=-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
Is this sufficient to verify that script splitting is enabled? Because I haven't seen any difference (my declarative pipeline is still failing).
Another question: you are talking about version 1.4 of which plugin?
BR,
Alessio
moglimcgrath Good news is that you can combine both types.
Our project now looks like
node('node') {
    stage('stage 1') { ... }
}
node('node') {
    stage('stage 2') { ... }
}
...
pipeline {
    agent { ... }
    stages { ... }
}
...
node('node') {
    stage('stage n-1') { ... }
}
node('node') {
    stage('stage n') { ... }
}
eplodn1 huumm, interesting.
I've already asked this over on https://issues.jenkins-ci.org/browse/JENKINS-56500 so apologies for the duplication.
Our code is predominantly in /vars: templated pipelines with each stage broken out into global vars, and the Jenkinsfile passes pipeline params. We really liked a lot of what declarative pipelines offer, but the file size issue is a pain.
At this point I'm starting to break logic out into classes in /src, and a move to scripted pipelines. Is a move to scripted the right option? If I break logic out into classes and use them with declarative, will I still hit the size limit?
See the attached pipeline, here or over on #56500. If you're facing the same issue, then classes or no classes, it will still hit the limit.
eplodn1 thanks for the replies.
Yes, I've tested with the attached pipeline and it reproduces the same issue we are seeing with our templated pipeline using shared libs.
I tried out bitwiseman's fix with the plugin update and the JAVA_OPTS variable set, but no joy.
We originally didn't really want to move away from declarative, but whatever the best option is, we will make the needed changes.
I assume when you say "classes or no classes, it will still hit the limit" you are referring to using classes combined with declarative.
Would a move to scripted pipelines, broken out into shared libs, be an option, or will it be more of the same? I don't want to refactor and find out after.
eplodn1
Um, why would you do that? That will definitely limit the effectiveness of script splitting from JENKINS-56500.
See the attached pipeline, here or over on #56500. If you're facing the same issue, then classes or no classes, it will still hit the limit.
I also am unclear on what you mean with this comment. Could you give an example?
Finally, I'm uploading an example of a pipeline helped by script splitting.
bitwiseman I think the point being made, and demonstrated with JenkinsCodeTooLarge.groovy is that even a pipeline that contains nothing except pipeline structure and calls to library functions can blow the limit on size. At some point, as the above pipeline demonstrates, you have factored out as much as you can and still blow the limit.
The limit is the problem here, not how much of a pipeline has been factored away in to a library. The latter is just a band-aid, postponing of the inevitable and frankly a wasted investment if you are going to have to end up scrapping the whole thing at some point and moving to an entirely new solution that won't have such inevitable fatal limits.
Hopefully the newly available matrix feature will help some people out, but there will still be people with big pipelines that are un-matrixable.
The latter is just a band-aid, postponing of the inevitable and frankly a wasted investment if you are going to have to end up scrapping the whole thing at some point and moving to an entirely new solution that won't have such inevitable fatal limits.
I'm not understanding your statement here. There is no way to make there not be some point at which this limit is hit - it is part of the Java class file binary format. You can hit this while writing any Java program. You don't hit it because the structure of Java encourages code practices that make it unlikely.
The Script_Splitting.groovy shows that script splitting addresses this issue for Declarative Pipelines that don't use variables (which is best practice). It is effectively the same as JenkinsCodeTooLarge.groovy
but without the variable declaration. Is there still a point at which you may hit the size limit? Yes, however, it is over 1000 stages (that's where I stopped), and even higher for matrix-generated stages. At which point hitting the issue isn't "inevitable" but rather highly unlikely.
How big of a pipeline are you trying to run?
If what you mean to say is "Well, I use variables so this doesn't help me", I understand your frustration. If you have bandwidth to contribute a solution, I'd love to chat with you about it.
I will investigate if/how this helps the next time we hit the limit.
Hi bitwiseman
I have been able to take your sample pipelines (Script_Splitting.groovy and Script_Splittingx10.groovy) and reproduce the issue "Method Code too large".
When I enable "SCRIPT_SPLITTING_TRANSFORMATION=true", the two pipelines you provided run successfully.
But when I add Script_Splitting.groovy to a shared lib under /vars, add the shared library under the Jenkins system configuration, create a mock app with a Jenkinsfile which consumes the pipeline, and set up a Multibranch job, I reproduce "Method Code too large".
Hi,
My config & setup:
- Jenkins ver. 2.190.3
- Declarative pipelines
- Pipeline jobs with groovy pipeline script of 591 lines and 39 jobs to build are failing with "General error during class generation: Method code too large!" (Files with 396 lines are fine)
I have Job DSL logic in place reading configuration files to create "pipeline code" (plus jobs and so on...) that is stored in variables in the Job DSL groovy scripts. It is then used when creating the Jenkins pipeline jobs.
So, the pipeline code is created "on the fly" with Job DSL groovy scripts.
In the repo I have a pipeline code TEMPLATE file. I read that into the Job DSL groovy code, do some editing/replacing, and store the final
pipeline code in an internal variable. So, the pipeline code is only stored in groovy variables, not in any file on disk, and it cannot be handled as static files.
This infrastructure works very well in other pipeline setups with less (less stages...) number of Jenkins jobs and pipeline code lines. This issue came as a surprise for me when creating this new setup with bigger pipelines. :-|
What to do?
I've seen references to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines.
But I doubt that I can update any code in files in the shared libraries from Job DSL groovy code...? (Comparing to what's done today - storing final pipeline code in groovy variable)
Anyone, any suggestion on the way forward? Or is the only way to split into more pipeline jobs and pipeline code files?
I don't want to make any big changes to the Job DSL logic that's already in place and works fine for today's smaller pipeline setups!
Just adding weight in the hope that this may be addressed sooner rather than later.
My objective has been to breakdown the legacy Jenkins jobs to run various steps as parallel stages for efficiency, with minimal change to the core scripts (which I have parameterised to accommodate either serial/parallel execution)
- DSL pipeline held in SCM (Git) consists of 632 lines
- builds five legacy nested Jenkins jobs as "explicitly numbered" Primary Stages:
- Prepare Environment called 7 times (8 stages: 1 serial + 5 parallel + 1 serial to merge report)
- Generate Tests (based on historical samples) with a conditional alternative stage to accommodate a re-run that required recycling processed data (i.e. 2 stages)
- Verify Clean Environment (optional) in parallel with stage 2 (5 stages: 1 serial + 2 parallel + 1 serial to merge report)
- Exec Generated Tests (3 stages: 2 parallel same tests on two baselines)
- Compare Results (7 stages: 1 serial + 4 parallel + 1 serial to merge report)
Currently, this amounts to 25 stages including the five overhead stages to handle the parallelisation. I have evolved to this state gradually, and only after I parallelised the 1st stage, did the "too big" problem appear. Also, I had expected to improve performance and visibility further by splitting Primary Stage 4 into 10+ parallel stages.
I am thinking that the best way forward may involve breaking the jobs into three levels (instead of two) by promoting the five Primary Stages as nested pipelines.
I accept that this regression testing exercise may not be the norm for most but any advice/help would be appreciated on a pragmatic way forward.
Thanks, Alan
I've been rewriting a scripted pipeline as declarative to reduce the complexity and improve readability, but have also run into this same issue with what I'd consider "a typical use-case" of Jenkins. I'm trying to reduce the size but am still up against the limit.
I understand this is probably not an easy fix but some assurance that this will be fixed in a future release would be helpful.
This same behavior is seen with scripted pipelines and it can be worked around - with increasing pain as the functional complexity of a pipeline grows. Apparently this is due to a core JVM limitation of 64K of compiled bytecode per method. Which is unfortunate in a code-generation world. Rather than spending a ton of time working around this, it would be really nice just to make the limit 10 times as big...
It would also be helpful to have a little more insight into what contributes to this. For example, Jenkins scripts, whether declarative or scripted, often have substantial blocks of text directly scripting the node (e.g. bash or whatever). Does the size of such scripting count directly toward the groovy/java code size, or is each of these treated as an opaque data blob whose size doesn't really matter? Is there some logging that enables me to see the current 'method size'?
One does have to wonder. Is groovy really the proper vehicle for defining pipelines then?
I just ran into this migrating from one orchestration multijob + multiple freestyle jobs to one pipeline (declarative plus matrix) on Jenkins 2.226. I have multiple stages ( build / test / deploy ) with matrices inside them ( build on x y, test on x y z, deploy on x y ). My Jenkinsfile is 587 lines.
FTR: I'm hitting this with a "not-so-big" (and no matrix) pipeline, ~800 lines, it includes a few separated stages:
- Build on Linux (and unit tests)
- Build on Windows (and unit tests)
- QA (spotbugs, checkstyle, etc)
- Security analysis
- Integration tests
- Release
amuniz Scripted? (or “Declarative” with script blocks?) It is unknown whether a general fix is feasible for Scripted. Would likely require a redesign of the CPS transformer, which is possible in principle but this is one of the most difficult areas of Jenkins to edit.
I've said this before in this thread, but as I keep getting notifications about new comments in this issue from people who refuse to admit their pipeline design sucks - I have prepared this detailed walkthrough.
Using this technique, I've been able to run 100 stages in both scripted and declarative mode before hitting this issue. I didn't try the workaround by bitwiseman, which might improve the Declarative case even further. I want to emphasize that if you have even half that many stages - you are doing CICD wrong. You need to fix your process. Jenkins just happens to be the first bottleneck you've faced down that path. That discussion can get really philosophical, as we would need to properly redefine what's CI and what's CD, what a Pipeline is, and why Jenkins is not a cron with a web interface. I really have no desire to be doing that here.
Matrix jobs might be an exception, but even then I'm not so sure, though I admit there might be a valid use case with that many stages in that space. But even then - execute your scripted pipeline in chunks (details below) - and there are no limits at all; I've been able to run a pipeline with 10000 stages! Though then my Jenkins fails to render that many stages in the UI. But more on that later.
Now, getting into the right way of doing Jenkins.
First and foremost - your Jenkinsfile, no matter where it is stored, must be small and simple. It doesn't have to tell WHAT to do, nor define any stages. All that is implementation detail that you want to hide from your users.
An example of such a Jenkinsfile:
library 'method-code-too-large-demo'
loopStagesScripted(100)
Note it doesn't matter at this point whether you're going to use scripted or declarative pipelines. Here, you are just collecting user input. In my example I have just one input - a number that defines how many stages I want in my pipeline. In a real-world example it might be any input you need from the user - type of project, platform version, any package/dependency details, etc. Just collect that input in any form and shape you want and pass it to your library. In my example a demo library lives here https://github.com/llibicpep/method-code-too-large-demo and loopStagesScripted is a step I have defined in it.
Now, it is up to the library to read the user input, do whatever calculations, generate your pipeline on the fly, and then execute it. But the trick is - the pipeline is just a skeleton; it defines the stages and does not actually perform any steps. For the steps it falls back to the library again. The resulting pipeline from that Jenkinsfile will look like this:
stage('Stage 1') {
    podTemplate(yaml: getPod(1)) {
        node(POD_LABEL) {
            doSomethingBasedOnStageNameOrWhatever(1)
        }
    }
}
stage('Stage 2') {
    podTemplate(yaml: getPod(2)) {
        node(POD_LABEL) {
            doSomethingBasedOnStageNameOrWhatever(2)
        }
    }
}
...
Note that in my example, intentionally to increase the complexity of my pipeline and demonstrate that everything is possible, I am using the Kubernetes plugin and I fall back to the library for my Pod definition calculation based on the user input too. So, my pipeline body doesn't really have much in it. Once the library has generated the pipeline string (and you can be as creative as you want with the ways you go about user input and templating - I had some examples in this issue previously) - it uses the `evaluate` step to execute it. The actual steps live in the library under doSomethingBasedOnStageNameOrWhatever; both the step name and its input may come from the templating layer to actually do something.
I want to emphasize that I didn't build my pipelines this way to work around this particular issue. Proper abstraction layers for stages (interfaces) and steps (implementation) just help me keep my pretty complex CICD code in good shape and order. It's readable, easy to understand, and also easily testable (both unit and integration testing).
Like I said, I've been able to run 100 stages that way before it fails. Even if you really need more, which I doubt, you can execute the pipeline in chunks - for instance, each stage separately. There is no limit if you do it that way; I've run 10000 stages like that and didn't face the Method code too large issue (though I did face other issues, like my Jenkins failing to render that many stages in the web UI). An example Jenkinsfile:
library 'method-code-too-large-demo'
loopStagesScriptedInChunks(10000)
If you look into the library code, you'll see all it does is call evaluate for each stage separately. There is a downside to this approach - Jenkins will not know all the stages in your pipeline ahead of time, so in the UI stages will pop up as they get executed.
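A minimal sketch of what such a library step could look like (the real library is at the GitHub link above; the code below is illustrative, not copied from it):

```groovy
// vars/loopStagesScripted.groovy (illustrative sketch)
def call(int stageCount) {
    // Generate the pipeline skeleton as a string: stages only, with
    // the actual work delegated back to library steps.
    def script = new StringBuilder()
    for (int i = 1; i <= stageCount; i++) {
        script << """
            stage('Stage ${i}') {
                node {
                    doSomethingBasedOnStageNameOrWhatever(${i})
                }
            }
        """
    }
    // evaluate() compiles the generated text as a separate script
    // class, so the main Jenkinsfile itself stays tiny.
    evaluate(script.toString())
}
```

The chunked variant would instead call `evaluate` once per stage inside the loop, so no single generated script grows without bound.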
Now, Declarative pipeline:
library 'method-code-too-large-demo'
loopStagesDeclarative(250)
It will use the same technique as loopStagesScripted, except that the body of the generated pipeline will be Declarative style. It will get executed the same way via evaluate, and will result in something like:
pipeline {
    agent none
    stages {
        stage('Stage 1') {
            agent { kubernetes { yaml getPod(1) } }
            steps { doSomethingBasedOnStageNameOrWhatever(1) }
        }
        stage('Stage 2') {
            agent { kubernetes { yaml getPod(2) } }
            steps { doSomethingBasedOnStageNameOrWhatever(2) }
        }
        ...
    }
}
I hope whoever really wanted a solution gets it now. And whoever wants Jenkins to accommodate their failures and maintain an artificial and invalid use case - I'm really sorry for you.
I've run into the same problem. I have a large (200+kb) pipeline script that is generated from our legacy build specification language.
My generated code is broken up into stages, with each stage running a potentially large parallel pipeline operation, where each platform of a given target is built.
After some googling about the underlying cause, I attempted to break it up by having each stage be defined as a method and calling the method rather than just having all the code in one giant block, but it didn't help.
As this will impact a lot of people trying to migrate to the Jenkins pipeline, and may make them throw up their hands due to the error being vague and unhelpful (which I realize is not the fault of Jenkins, but the underlying Java/Groovy architecture), it might be good to have some specific guides on how to deal with this.