Jenkins / JENKINS-37984

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large! error in pipeline Script


Details

    Description

      Note from the Maintainers

      There is a partial fix for this for Declarative Pipelines in pipeline-model-definition-plugin v1.4.0 and later, significantly improved in v1.8.4. Due to the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):

      org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true 

      As noted, this still works best with a Jenkinsfile that has the pipeline directive as the only root item in the file.
      Since v1.8.2 this workaround reports an informative error for pipelines using `def` variables before the pipeline directive. Add a @Field annotation to those declarations.
      This workaround generally does NOT work if the pipeline directive is inside a shared library method. If this is a scenario you want, please come join the Pipeline Authoring SIG and we can discuss.
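      To illustrate the @Field note (the variable name `deployTarget` and its value are hypothetical, not from this issue), a minimal sketch of such a declaration moved under the annotation:

```groovy
import groovy.transform.Field

// Before: `def deployTarget = 'staging'` at script level trips up script splitting.
// The standard Groovy @Field transform makes it a field of the script class
// instead of a local variable of the generated run() method.
@Field def deployTarget = 'staging'

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying to ${deployTarget}"
            }
        }
    }
}
```

      This snippet requires a Jenkins controller to actually run; it is shown only to make the @Field instruction concrete.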

      Please give it a try and provide feedback. 
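      For reference, the property can be set either from the script console or on the JVM command line; a sketch (the mechanics below are standard Jenkins/JVM usage rather than something spelled out in this issue):

```groovy
// Manage Jenkins → Script Console (takes effect until the next restart):
org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true

// Or pass it to the JVM that starts Jenkins, e.g.:
//   java -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -jar jenkins.war
```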

      Hi,

      We are getting the error below in a Pipeline that has some 495 lines of Groovy code. Initially we assumed that one of our methods had an issue, but once we remove any 30-40 lines of the Pipeline Groovy, the issue goes away.

      Can you please suggest a quick workaround? It's a blocker for us.

      org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
      General error during class generation: Method code too large!
      
      java.lang.RuntimeException: Method code too large!
      	at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
      	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
      	at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:815)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      
      1 error
      
      	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1073)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      Finished: FAILURE
      

      Attachments

        1. errorIncomaptiblewithlocalvar.txt
          8 kB
        2. java.png
          294 kB
        3. JenkinsCodeTooLarge.groovy
          45 kB
        4. Script_Splitting.groovy
          44 kB
        5. Script_Splittingx10.groovy
          519 kB

        Issue Links

          Activity

            pmcnab Pete McNab added a comment - - edited

            I've run into the same problem. I have a large (200+kb) pipeline script that is generated from our legacy build specification language.

            My generated code is broken up into stages, with each stage running a potentially large parallel pipeline operation, where each platform of a given target is built.

            After some googling about the underlying cause, I attempted to break it up by having each stage be defined as a method and calling the method rather than just having all the code in one giant block, but it didn't help.

            As this will impact a lot of people trying to migrate to the Jenkins pipeline, and may make them throw up their hands due to the error being vague and unhelpful (which I realize is not the fault of Jenkins, but the underlying Java/Groovy architecture), it might be good to have some specific guides on how to deal with this.

            jglick Jesse Glick added a comment -

            Surprised that breaking it up into separate methods did not help. Other than that, I have no suggestions offhand.

            pmcnab Pete McNab added a comment -

            Yeah it's kind of puzzling – as part of my testing I had reduced the number of stages significantly.

            So a version that worked had 4 stages with no methods and it was about 1720 lines.

            With the full set of stages (27), the number of lines is about 5300 lines. Breaking it up into methods had no method larger than about 450 lines, but it wouldn't run.

            I also went in and removed the 4 stages that had worked, which put the entire script down to ~3600 lines broken out into methods, and that wouldn't run either.

            pmcnab Pete McNab added a comment - - edited

            Whoops. Well, cancel what I said. When I generated the code with what I intended to be methods, I was actually defining closures attached to a variable. When I did it properly as methods, it works.

            So more specifically, for those of you who find this, what I used to have was:

            stage foo
            parallel([
             ... giant list of maps ...
            ])
            

            What I changed it to was:

            stage foo
            def build_foo() {
                parallel([
                    ...giant list of maps...
                ])
            }
            build_foo()
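            To make the distinction explicit (names are illustrative, and the explanation is my reading of the reports in this thread rather than an authoritative account of the CPS transform): a closure attached to a variable did not relieve the limit, while a genuine method declaration did. The two variants below are alternatives, shown together only for contrast:

```groovy
// Did NOT help: the body is a closure assigned to a variable.
def build_foo = {
    parallel([ /* ... giant list of maps ... */ ])
}
build_foo()

// Helped: a real method declaration, compiled separately from the main script body.
def build_bar() {
    parallel([ /* ... giant list of maps ... */ ])
}
build_bar()
```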
            
            jglick Jesse Glick added a comment -

            Sounds like a good candidate for a self-answer.

            pmcnab Pete McNab added a comment -

            I made one – I still think this case should be touched on in the Pipeline docs. I don't think searching Jira or StackOverflow is a substitute for documentation.

            jglick Jesse Glick added a comment -

            Sure, that is why this is still open.

            phillan philippe lançon added a comment - - edited

            The problem seems to be linked to the overall size of the script, but I noticed that it also depends on the number and size of the stages.
            A workaround is to declare a function that is called from the big stages.

            Before:

            stage ('build') {
                ....
                if (.....) {
                } else {
                }
                .....
            }

            Workaround:

            def build() {
                ....
                if (.....) {
                } else {
                }
                .....
            }
            stage ('build') {
                build()
            }

            abayer Andrew Bayer added a comment -

            So this is happening because of how CPS transformation works - everything's getting wrapped in a single method behind the scenes, and that's ending up being too large. The best answer we've got is to move things out into shared libraries.
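            A minimal sketch of the shared-library approach described above (the library name `my-shared-lib` and step name `buildAllPlatforms` are hypothetical): each global variable in the library's vars/ directory is compiled as its own class, so the Jenkinsfile itself stays tiny:

```groovy
// vars/buildAllPlatforms.groovy in the shared library
def call(List platforms) {
    def branches = [:]
    platforms.each { p ->
        branches[p] = {
            node {
                stage("Build ${p}") {
                    sh "make PLATFORM=${p}"
                }
            }
        }
    }
    // Run all platform builds in parallel from one small library step.
    parallel branches
}

// Jenkinsfile (now only a couple of lines):
// @Library('my-shared-lib') _
// buildAllPlatforms(['linux', 'windows', 'macos'])
```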

            llibicpep Dee Kryvenko added a comment -

            Any workarounds? My scenario is quite specific: we treat the Jenkinsfile as some sort of JIT. No one works with it directly; it is generated automatically based on something else. For debugging purposes it is essential to have a big-picture view of what the resulting Jenkinsfile looks like, and there is no benefit in splitting it or moving it to a shared library (again, this is not for humans and not made by humans).

            We need a way to increase this limit.

            ruudp Ruud P added a comment - - edited

            I use a script block as a workaround. For me it was a lot of work to change, but the result was smaller code, which is always good:

               stages {
                stage('parallel stages') {
                  steps {
                    script {

            In my case I have 200-300 parallel stages.

            I generate the parallel stages with a single function and execute more than 200-300 stages in parallel. This is very dynamic, as the number of stages depends on user input. So I now use a script block to generate and execute all the stages, instead of using declarative Pipeline syntax.
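            The approach described here could be sketched roughly like this (the TARGETS parameter and stage names are illustrative, not taken from the actual pipeline):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'TARGETS', defaultValue: 'linux,windows,macos',
               description: 'comma-separated build targets')
    }
    stages {
        stage('parallel stages') {
            steps {
                script {
                    // Build the branch map dynamically from user input instead of
                    // writing hundreds of declarative stage blocks by hand.
                    def branches = [:]
                    params.TARGETS.split(',').each { t ->
                        branches[t] = {
                            stage("build ${t}") {
                                echo "building ${t}"
                            }
                        }
                    }
                    parallel branches
                }
            }
        }
    }
}
```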

             

            llibicpep Dee Kryvenko added a comment - - edited

            The thing is, I actually don't want to hide this code in a library. My library already does its thing: I have a custom DSL syntax that my library goes through to generate the resulting Jenkinsfile. And I follow TDD, so I have tests, and my tests basically check the resulting Jenkinsfile against my own DSL. Hiding parts of the Jenkinsfile in the library would ruin the whole idea of testing for my project. Again, the Jenkinsfile for me is a kind of JIT; it's not code to be read or created by a human.

            thelamer Ryan Kuba added a comment - - edited

            I am running into this, example file: 

            https://pastebin.com/raw/fnKnYiMq

            If I rip out some env declaration blocks like: 

                     script{
                       env.CI_IMAGE = env.DEV_DOCKERHUB_IMAGE
                       env.CI_TAGS = env.EXT_RELEASE + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
                       env.CI_META_TAG = env.EXT_RELEASE + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
                     }
            

            It will compile and run. I have already tried moving every script section out to functions, but I still get Method code too large.

            This is only ~800 lines, and even if this text were compiled at 10x the size it would not be at the 64k Java limit.

            I am trying to understand how the workflow is interpreted such that this limit could be hit.


            slavik334 Viachaslau Kabak added a comment -

            +1, it is a real headache at about 800-900 lines of code. Could you please prioritize fixing this issue?
            sz2804 Szymon Surudo added a comment -

            Hello Guys,

            I am trying to find a workaround, but after some testing I am not able to determine how the Jenkinsfile is parsed in the case of a Scripted Pipeline. It seems to me that the whole file goes to the JVM at once, which gives the limitation of 800-900 lines of code mentioned above. I tried to artificially split the code into different node blocks which use the same runners, like:

            node('master'){ stage stage stage }
            node('master'){ stage stage stage }
            node('master'){ stage stage stage }

            but it makes no difference. Is there any way to modify the code structure so it would be compiled in chunks? Can I load other Jenkinsfiles dynamically? Or maybe it is possible to move some code to shared Groovy libraries (but then I would need to call the HTTP Request plugin from the libraries, which I don't know whether is possible)?

            I also did an additional, maybe even stupid, test to check how many instructions are too many to compile, and I was a bit shocked to see an ArrayIndexOutOfBoundsException when I had something above 400 print invocations in code divided into 3 stages. Are the instructions held on a stack and then sent to the JVM? How come a declarative pipeline is so easy to split and the scripted version is not?

            I would very much appreciate any workaround to move on with dev.

            Best Regards,

            Szymon

            sz2804 Szymon Surudo added a comment -

            It may sound obvious, but it worked for me when I extracted some methods of reusable code in the Jenkinsfile; those are compiled in such a way that I can put more code into them. I don't know where my fixation came from, but I thought that methods were only available in declarative pipelines, whereas in scripted pipelines code should be moved to, for instance, shared libraries. So now my code structure looks like this:

            node('master') {
                stage('setup') {
                    // some logic
                    method1()
                    method2()
                }

                stage('cleanup') {
                    // some logic
                    method2()
                    method3()
                }
            }

            def method1() {}
            def method2() {}
            def method3() {}


            henryborchers Henry Borchers added a comment -

            I'm starting to run into this issue myself when my pipeline reaches about 800 lines. I've been creating helper functions outside of the "pipeline" brackets, and that's helping, but I find myself still running into this issue more than I'd like.

            I can't help it. Having all the new nice features makes me write longer, more useful pipelines.
            llibicpep Dee Kryvenko added a comment - - edited

            800 lines of code in one file sounds bad in any language. It definitely needs refactoring and shared libraries.

            Just to be clear: in my case above I experienced the issue while already having my libraries, and it was due to the way I designed them (they treat the Jenkinsfile concept as some sort of JIT; they basically do some calculations based on the input and spit out a long resulting Jenkinsfile that I then eval()ed). I solved my case by splitting what I eval() into chunks, exchanging data through a context singleton object in my library (surprisingly, singleton instances were not per Jenkins master JVM but per library instance, i.e. per individual build). So technically my case wasn't even related to Jenkins at all. I was sending too long a string into the eval() method, and the JVM was legitimately rejecting it. Just to give an example, my chunks would look like:

            getContext('stages')['Test Stage'] = {
                echo 'hi'
            }

            getContext('stages')['Second Test Stage'] = {
                echo 'hi again'
            }

            timestamps {
                ansiColor("xterm") {
                    stage('Test Stage', getContext('stages')['Test Stage'])
                    stage('Second Test Stage', getContext('stages')['Second Test Stage'])
                }
            }
            

            Given that, I think this issue may be closed now.


            henryborchers Henry Borchers added a comment -

            llibicpep, my pipeline is long because it contains more than just unit testing. It's a complete DevOps pipeline with optional stages depending on the situation. I have sequential stages, parallel stages, and most of these have a post section that cleans up or depends on the success or failure of the stage. It's very declarative and pretty easy to reason about, so there really shouldn't be a reason to refactor the code.

            Jenkins has become a very powerful tool with the Pipeline DSL, with a lot of very useful features. It's a shame when I can't use a feature because my pipeline contains too many lines already.
            llibicpep Dee Kryvenko added a comment - - edited

            henryborchers, the complexity of your pipeline is completely irrelevant. Just to draw a parallel, the fact that you're creating a complex enterprise product does not justify putting all its code in a single file, or even in a single method, does it?

            Parts of your pipeline have to be reusable functions in the shared library, so your actual Jenkinsfile should consist of only simple statements, something like:

            doThis()
            if (itsTrue()) {
              doThat()
            }
            

            That applies only to scripted pipelines, of course. According to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines it sounds like declarative pipelines are even more limited than I thought, so I'm happy I don't use 'em.

            llibicpep Dee Kryvenko added a comment -

            A quick Google search reveals there is actually an open source project doing something good enough to demonstrate what I mean:
            https://github.com/fabric8io/fabric8-pipeline-library
            https://github.com/fabric8io/fabric8-jenkinsfile-library


            henryborchers, the complexity of your pipeline is completely irrelevant.

            Couldn't agree more

            Just to draw a parallel, the fact that you're creating a complex enterprise product does not justify putting all its code in a single file, or even in a single method, does it?

            It's not a complex enterprise product. Quite the opposite. I don't have much support, so I have to automate as much DevOps stuff as possible myself. Because I have very few resources and stakeholders that require a lot, I'm making the most of the resources I can get my hands on.

            Parts of your pipeline have to be reusable functions in the shared library, so your actual Jenkinsfile should consist of only simple statements, something like:

            doThis()
            if (itsTrue()) {
              doThat()
            }
            

            That applies only to scripted pipelines, of course. According to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines it sounds like declarative pipelines are even more limited than I thought, so I'm happy I don't use 'em.

             

            I'm happy that you're happy that you don't use declarative pipelines. However, I do. The only limit I've run into has been line length.

            A quick Google search reveals there is actually an open source project doing something good enough to demonstrate what I mean:
            https://github.com/fabric8io/fabric8-pipeline-library
            https://github.com/fabric8io/fabric8-jenkinsfile-library

            Yes, I already use shared libraries for some things. I just don't need them very often.

             

            The only reason my pipelines are so long is because of the declarative style.

             

             

            stage("Run Doctest Tests") {
                when {
                    equals expected: true, actual: params.TEST_DOCTEST
                }
                steps {
                    bat "pipenv run sphinx-build -b doctest docs\\source build\\docs -d build\\docs\\doctrees -v"
                }
                post {
                    always {
                        dir(reports) {
                            archiveArtifacts artifacts: "doctest.txt"
                        }
                    }
                }
            }
            

             

             

            It's more verbose, but I find it highly readable and very easy to maintain. Shared libraries are nice for helpers, but they can be a pain to maintain, so I keep them simple.

            llibicpep Dee Kryvenko added a comment - - edited

            I don't have much support so I have to automate as much DevOps stuff myself. Because I have very little resources and stakeholders that require a lot, I'm making the most of the resources I can get my hands on.

            That is a somewhat twisted conclusion. A shared library, as a layer of abstraction, helps you maintain and simplify your life, as long as you use it right. The same goes for the ability to run unit tests against that code. Otherwise it's just a snowflake: it may look beautiful from a certain angle, but that is your first symptom right there.

            However, it's a pointless discussion. I am just a user, the same as you, and I am not the one making that call, but even then I don't see what Jenkins can possibly do, as this is a JVM limitation. The Jenkinsfile is essentially a Groovy DSL file which ends up being executed as a single method. I can imagine they could do some semantic analysis and try to automatically split it into chunks, or extract parts of it into separate functions, but the level of effort that would need to be put into it is nowhere close to the potential benefit. So I wouldn't expect any resolution to this issue.


            I have a specific example where I have 174 individual git repos that I want to sync and then run Simian and other static code analysis on, to check for warnings/errors/duplicate code, etc.

            I have even tried breaking the checkout down into multiple stages. How do I fix this? I have tried moving it out to a function, but that does not work. I have tried breaking it down into multiple (12+) stages, and that also does not work.

            My total script is 224 lines long (64K). Not sure this should be too big.

            stephentunney Stephen Tunney added a comment -

            Getting the same error with a much smaller pipeline (actually delivered as a shared library); the whole thing (*.groovy), with comments and long property names, is just slightly over 64 KB.

            I guess when it is translated into one big method it is obfuscated/optimised.

            jglick is there a way to monitor the size of the transformed pipeline? Some output, or a custom job to monitor it, would be great.

            quas Jakub Pawlinski added a comment -
            jglick Jesse Glick added a comment -

            I do not foresee anyone spending much effort on this issue, as that would distract from the goal of moving Pipeline execution out of process, rendering this moot.


            Could you share some details on "moving Pipeline execution out of process"? Is this on a roadmap somewhere?

            It would be enough to output the pipeline size during the build so I know how far I am from hitting this. Something like this (hypothetical API):

               println Jenkins.instance.getPipelineSize('pipeline.groovy').toString()

            quas Jakub Pawlinski added a comment -
            dantran dan tran added a comment -

            We just hit this issue today in our pipeline with many stages, mainly used to run many integration test suites.

            svanoort Sam Van Oort added a comment -

            quas It is on the roadmap and has a couple engineers working hard on it (myself included), although we're not quite ready to demo or announce something (a few more pieces need to fall into place).


            svanoort any updates on this issue? I see nobody assigned to it right now, but I've just hit the same issue on our CI.

            avsej Sergey Avseyev added a comment -
            jglick Jesse Glick added a comment -

            avsej as discussed above, I doubt anyone is planning to spend time on this.


            jglick, but the last comment a month ago was about working hard on it, so I thought there might be some progress, or at least an assignee for the issue.

            avsej Sergey Avseyev added a comment -
            jglick Jesse Glick added a comment -

            No, the last comment was about working on a completely different execution engine that would not suffer from this class of bugs by design.

            naval_gupta01 Naval Gupta added a comment -

            Any update or resolution on this issue?

            Our production deployment is stuck due to this pipeline size restriction. 

            The pipeline script fails if it contains more than 78 build job statements in total. Right now we are splitting the complete script into smaller pipeline scripts and running them all in a predefined sequence, with the constraint that a single parallel block cannot contain more than 78 builds.

            But this is production, and as the server list grows this count will go beyond 100 at some point, and then this pipeline job concept won't be of any use.

            Therefore requesting you to expedite this and provide us an update.
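            For the "too many build statements" case, one hedged workaround is to generate the downstream build calls from data instead of writing them out literally, so the Jenkinsfile source stays short no matter how many servers there are. A scripted-pipeline sketch; the job name deploy-one-server, its SERVER parameter, and the batch size are hypothetical:

```groovy
// Hypothetical sketch: fan out downstream builds from a list, in batches,
// instead of hand-writing one `build job:` statement per server.
def servers = (1..100).collect { "server-${it}" }

for (batch in servers.collate(25)) {       // batches of 25 parallel builds
    def branches = [:]
    for (s in batch) {
        def server = s                     // capture the loop variable for the closure
        branches[server] = {
            build job: 'deploy-one-server',
                  parameters: [string(name: 'SERVER', value: server)]
        }
    }
    parallel branches
}
```

            Plain `for` loops are used rather than `.each` because they are safer under Pipeline's CPS transformation.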

            petethomson Peter Thomson added a comment -

            Any news on this? Like naval_gupta01, we are also suffering within our Jenkins system due to this issue.

            jp_eh J P added a comment - - edited

            We are on Jenkins 2.150.2 and Groovy Plugin 2.60.  We started getting this error when we upgraded Groovy to 2.6.1.  Our pipeline is about 690 lines.  It is a workflow to deploy 2 application components to staging servers and then to prod servers.  Please let us know when we can get an update on this.


            I'm running into this more and more. I've put what makes sense into shared libraries and squeezed more and more into ugly "helper functions" outside of the pipeline, but it's getting really hard.

            I really hope there is a better solution coming soon, because I'm pretty sure my coworkers are working with HR to have a talk with me about how much swearing I've been doing.

            henryborchers Henry Borchers added a comment -
            wim Wim Gaethofs added a comment -

            Even when putting everything into shared libraries, my pipeline code is still 700+ lines. 

            I had to split my code up into 2 Jenkins jobs because of this issue. 


            svanoort, it's been a few months since you teased us about something on your roadmap that would alleviate this issue. Any chance you could provide a little more info, or at least tease us enough to whet our appetite? I'm running up against the limit way too often these days. 

            henryborchers Henry Borchers added a comment -
            jglick Jesse Glick added a comment -

            I am not aware of any plans to work on this issue. I tend to doubt it is fixable in the current Pipeline execution engine (workflow-cps), beyond better reporting the error. The known workaround is to split long blocks of code into distinct methods.


            jglick, a few posts ago you mentioned something about creating a different execution engine that wouldn't have this issue. Anything you can point me to so that I can follow the progress, or at least anything interesting you can tease to keep me hopeful that the future looks bright?

            I have been putting any "steps block" that is more than 1 line into helper functions, but I'm still running into issues. 

            I'm sorry if I come off as nagging. I just really love Jenkins. The declarative pipeline has been one of my favorite tools, which I use for everything I build.

            henryborchers Henry Borchers added a comment -
            jglick Jesse Glick added a comment -

            henryborchers I believe that work is now inactive. I am afraid I have no particular advice for now beyond:

            • If at all possible, remove logic from Pipeline Groovy code and move it into external processes. We see a lot of people with these very complex calculations that could better have been done in some Python script or whatever in the workspace, so that the Pipeline part would boil down to just node {sh './something'}.
            • When not possible (for example because the Pipeline script is actually required to determine how to configure steps such as parallel or Jenkins publishers), split up long functions (including the implicit main function at top level of a Jenkinsfile) into shorter functions.
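            The second bullet can be sketched in scripted Pipeline. Everything named below (the helper functions and the Gradle commands) is hypothetical; the point is only that each named function compiles into its own method, so no single method approaches the 64 KB bytecode limit:

```groovy
// Hypothetical sketch: the implicit main function only orchestrates;
// each helper compiles into its own (small) method.
def checkoutSources() {
    checkout scm
}

def runBuild() {
    sh './gradlew build'   // hypothetical build command
}

def runTests() {
    sh './gradlew test'    // hypothetical test command
}

node {
    stage('Checkout') { checkoutSources() }
    stage('Build')    { runBuild() }
    stage('Test')     { runTests() }
}
```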
            llibicpep Dee Kryvenko added a comment -

            Any `Jenkinsfile` of any complexity can be shortened to just one line that looks like `doStuff()`. Does it make sense to do it that way? Probably not, but hopefully it gives an idea as to where to move with this issue.
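            That one-liner works because shared-library globals compile as their own classes, outside the Jenkinsfile's single method. A minimal hedged sketch, where the library name, step name, and make targets are all hypothetical:

```groovy
// vars/doStuff.groovy in a hypothetical shared library named 'my-shared-lib'.
// This body compiles as its own class, so it does not count against the
// Jenkinsfile's method-size limit.
def call() {
    node {
        stage('Build') { sh 'make build' }   // hypothetical commands
        stage('Test')  { sh 'make test' }
    }
}
```

            The Jenkinsfile then shrinks to:

```groovy
@Library('my-shared-lib') _
doStuff()
```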

            llibicpep Dee Kryvenko added a comment - - edited

            Let me re-phrase and sum up some of the questions in this thread:

            "I'm writing my application, and even though I moved a bunch of the stuff into separate methods, I still keep all of my high-level app flow in my `main()` method, and it is still too big and Java complains about that. Can you fix it plz?"

            I think the ticket may now be closed.


            Why are you talking about moving stuff into functions, and mentioning flows? This problem also reproduces in declarative pipelines, which do not have code at all. They just describe steps, and each one is a single line that invokes a built-in function.

            avsej Sergey Avseyev added a comment -
            llibicpep Dee Kryvenko added a comment -

            Well, if a declarative pipeline is so big that it won't fit into the limit, clearly the definition of a step needs some re-thinking. It sounds like a layer of abstraction is required to wrap multiple commonly-reusable steps into one, to reduce the count.


            Are you saying that all the steps I'm using in my pipeline are also counted and inlined into the resulting class object? Stuff like zip(...), archiveArtifacts, etc.?

            avsej Sergey Avseyev added a comment -
            llibicpep Dee Kryvenko added a comment -

            The entire Jenkinsfile effectively becomes the body of some sort of `eval` function under the hood (for the sake of simplification, let's forget about CPS and such). It makes no difference if you split the code into methods but still keep those methods in the Jenkinsfile. The methods need to be moved into a shared library, or otherwise made available in the Jenkinsfile scope.
            No matter whether it's a declarative or scripted pipeline, it's just directives that are effectively Groovy closures. The standard Java rules still apply no matter what.

            jglick Jesse Glick added a comment -

            This problem also reproducing in declarative pipelines

            If true then it may be feasible to provide a fix in the pipeline-model-definition plugin, even without a general fix for Scripted.

            eplodn1 efo plo added a comment -

            We get this error in a purely declarative pipeline just by the sheer amount of 

            stage {
                when { ... }
                agent { ... }
                steps { ... }
                post { success { ... } failure { ... } cleanup { ... } }
            }

            Add parallels, rinse, repeat — Method code too large.

            Our steps {} are already a call to a single function.

            It would be great if we could produce the stages in a separate function or file, but so far we can't find anything on how to go about it.

            wim Wim Gaethofs added a comment -

            I'm seeing the same thing as efo plo in a declarative pipeline: only defining the flow in the pipeline with stages, and calling shared libraries and functions to execute code. 

            I now have 41 stages inside my pipeline{}. Adding just one more stage gives me this error. 

            jglick Jesse Glick added a comment -

            It would be helpful if someone observing this issue in Declarative could create a new issue in the pipeline-model-definition-plugin component (linked to this one), attaching a minimal, self-contained Jenkinsfile reproducing the error in a specified version of Jenkins and of the workflow-cps (Pipeline: Groovy) and pipeline-model-definition (Pipeline: Declarative) plugins. I am not making any promises, but that would at least improve the odds of a targeted fix for that case. (Bonus points for a pull request to jenkinsci/pipeline-model-definition-plugin adding an @Ignore-d test case demonstrating the error.)

            kgiloo kgiloo added a comment -

            You can only move your code at the script level into classes, but you can't do this at the stage(s) level, which drastically restricts any refactoring.

            I have a parallel job running on 6 different platforms with the "same" stages, but I have to copy-paste the stages into a monster pipeline, which ends up as a 1130-line call method.

            This is really a deal breaker in pure declarative.
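            For the "same stages on N platforms" shape, newer versions of Declarative (pipeline-model-definition 1.5.0 and later) offer the matrix directive, which replaces the copy-pasted blocks with a single template, so the source that gets compiled stays small. A hedged sketch; the platform labels and the runPlatformTests step are hypothetical, and availability depends on your plugin versions:

```groovy
pipeline {
    agent none
    stages {
        stage('Test all platforms') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows', 'mac'   // hypothetical labels
                    }
                }
                agent { label "${PLATFORM}" }
                stages {
                    stage('Test') {
                        steps {
                            runPlatformTests(PLATFORM)     // hypothetical shared-library step
                        }
                    }
                }
            }
        }
    }
}
```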

            llibicpep Dee Kryvenko added a comment -

            you can only move your code at the script level in classes but you can't do this at the stage(s) level, which restricts drastically any refactoring.

            That's not entirely true. What I'm going to say might be too advanced for many users, but pipelines that hit this limit sound pretty advanced to me. A shared library can provide your own custom declarative syntax (or imperative, if you prefer). The shared library, being a layer of abstraction, can then calculate the resulting Jenkinsfile (either declarative or scripted) based on your input and send it to `eval()`.

            kgiloo kgiloo added a comment -

            llibicpep, I am afraid I do not get your point.

            The scope is pure declarative; hence I doubt you can wrap any code outside of your main pipeline, except inside script { }.

            If you can do so, then please post a snippet of code, thank you.

            eplodn1 efo plo added a comment - - edited

            Added a sample Jenkinsfile reproducing the problem.
            It is also available at https://pastebin.com/eDVppFjm

            org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
            General error during class generation: Method code too large!
            
            java.lang.RuntimeException: Method code too large!
            	at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
            	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
            

            Any and all ideas with regards to how this may be refactored are more than welcome.

            llibicpep Dee Kryvenko added a comment - - edited

            The very basic example is all there: https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines. You don't define parts of the pipeline, but the entire pipeline, inside the library. The example is pretty primitive; my idea is that the library can implement a number of custom closures to make the following syntax available:

            myPipeline("platform") {
              foo "bar"
              deploy "baz"
            }

            In other words, the input Jenkinsfile becomes nothing more than a metadata file explaining what's inside the repository. The shared library in turn can do calculations based on the input above and do a lot of templating work in order to produce the real Jenkinsfile, to send it to `eval()` or execute inline.

            Another example https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
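            A hedged sketch of what such a myPipeline global might look like in a shared library. The closure-delegation pattern is standard Groovy, but every name here (myPipeline, foo, deploy, deployTarget) is illustrative, not an existing API:

```groovy
// vars/myPipeline.groovy -- hypothetical shared-library step implementing
// the mini-DSL above by delegating the closure to a map of handlers.
def call(String platform, Closure body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = [
        foo   : { v -> config.foo = v },
        deploy: { v -> config.deployTarget = v }
    ]
    body()   // runs `foo "bar"` / `deploy "baz"`, filling in config

    // The library, not the Jenkinsfile, now owns the (potentially large) flow.
    node(platform) {
        stage('Foo')    { echo "foo is ${config.foo}" }
        stage('Deploy') { echo "deploying to ${config.deployTarget}" }
    }
}
```

            With this, the Jenkinsfile itself contains only the short myPipeline block, keeping its compiled method well under the limit.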

            eplodn1 efo plo added a comment -

            In my understanding, that would do nothing except move the error to the stage of compiling the shared library method defining this pipeline. But if you could attach a pastebin that would compile that pipeline, I'd be more than happy to be proven wrong. Please note that while I simply replicated the same stage over and over, in practice every one of our stages calls different functions (all of which are already defined in a shared library).

            llibicpep Dee Kryvenko added a comment -

            Well, if you do something like this but the resulting Jenkinsfile is still in declarative style, you probably don't solve anything, as its size will still be huge. But the point is, you just came up with your own declarative style that is simpler, smaller, and better suits your project or organization, since it was designed specifically for it. So you probably don't need declarative style as the output any more; it becomes nothing more than an intermediate format between the shared library and Jenkins (sort of like JIT in the Java world, really just a language one machine uses to talk to another), and there's no point in keeping all those declarative-style limits. In turn, if the produced result is in scripted format, it can be split into chunks and passed to eval() separately, for instance per stage.

            I guess my point is: declarative style just sucks. From my experience it works only for small and simple projects, despite how hard it's being advertised as the opposite.

            eplodn1 efo plo added a comment -

            As it is, I am not looking to come up with my own declarative style or a specific custom-fit design for my organization. I am just trying to do my job fighting this "Method code too large!" error.

            llibicpep Dee Kryvenko added a comment -

            Well then you may be trying to apply the wrong tool for the job. Try some commercial offerings, maybe.

            eplodn1 efo plo added a comment -

            Thanks for your opinion, llibicpep!


            henryborchers Henry Borchers added a comment -

            Well then you may be trying to apply the wrong tool for the job. Try some commercial offerings, maybe.

            I understand you're frustrated, but comments like this are not very helpful.

            I guess my point is: declarative style just sucks. From my experience it works only for small and simple projects, despite how hard the opposite is advertised.

            I couldn't disagree more. It's super easy to read and understand. It has a clean syntax. It's well thought out. It's expandable with scripts if need be. It's so easy to look through Blue Ocean to see why something has failed. It's freaking brilliant!!!

            The declarative pipeline is the main reason that I use Jenkins and why I advocate for it wherever I go. Did I mention it's brilliant? The biggest issue is that it is so easy to use that I just want to use it more and more until I get smacked in the face with "Method code too large!". It's a little bit like the use of electricity over the last century. Most electronic devices have become more efficient throughout the decades and use less electricity as their designs have improved. However, we keep finding new uses for electricity at a faster rate than we can make devices more efficient. We end up needing more and more power plants as a result, not fewer.

            I feel the same way about Jenkins and especially the declarative pipeline. I've been whittling my steps down to fewer and fewer lines. I make shared libraries for my common code and "helper script functions" outside of the pipeline block. However, I keep finding more and more ways to catch problems in my code early with careful placement of sequential and parallel stages and "when" and "post" blocks. I end up with lots of stages with one or two lines in the steps block. Then all of a sudden, I run into "Method code too large!"... And then I become a sad developer...
            llibicpep Dee Kryvenko added a comment -

            I understand you're frustrated but comments like this are not very helpful.

            Well, sometimes the truth is not helpful at all. What's your point? Maybe you have a solution or even a PR? Enlighten us.

            I couldn't disagree more.

            You didn't seem to read my suggestion carefully enough. The declarative approach, indeed, is the future. The particular Jenkinsfile declarative implementation sucks, though, because you kind of use declarative style but you still have to imperatively define steps. What a bummer!

            If you read carefully what I said, you'll see that what I opted for is rather to have a small metadata file in the repository that states facts about the app inside, so that the library can deal with it and render a pipeline for it. This rendered body is much easier to work with when it's in scripted style, and the fact that it's scripted doesn't matter because users don't deal with it directly.
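            A minimal sketch of that renderer idea, assuming a hypothetical shared-library step `vars/appPipeline.groovy` and a made-up `ci.yaml` metadata file (all names and fields here are illustrative, not from the original comment; `readYaml` comes from the Pipeline Utility Steps plugin):

            ```groovy
            // vars/appPipeline.groovy (hypothetical shared library step).
            // The repository's Jenkinsfile shrinks to two lines:
            //   @Library('my-lib') _
            //   appPipeline()
            def call() {
                node {
                    checkout scm
                    // Small metadata file describing the app,
                    // e.g. { stages: ['build', 'test', 'deploy'] }
                    def meta = readYaml file: 'ci.yaml'
                    // Render a *scripted* pipeline from the metadata;
                    // users never touch this generated structure directly.
                    for (String name in meta.stages) {
                        stage(name) {
                            sh "./ci/${name}.sh"   // hypothetical per-stage script
                        }
                    }
                }
            }
            ```

            The generated body stays small because the per-stage logic lives in library methods and repository scripts rather than in the compiled pipeline method.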

            ashsy_009 Ashwin Y added a comment -

            hello,
            Is there any solution or workaround for this?
            This has been open for the last 3 months! Please provide a solution to this.

            siva_baisani sivanarayana baisani added a comment - - edited

            I have a strange scenario with this error. I have jobs which are huge: the 1st job is about 1200 lines long and works completely fine, while the 2nd job is about 850 lines and hits this error. Removing some lines from the code made it work again.


            jbennett20912 Jeffrey Bennett added a comment -

            There is a workaround, of sorts. There is a fixed limit to the size of the groovy. No way around that limit, but you can do things to minimize what's in groovy.

            Eliminate clutter. For us, that was stuff like echo statements and unused variables. You might also be able to alter the Jenkins job's configuration rather than setting options in the groovy.

            A better answer is to shift large blocks to scripts. For example, if you have a bunch of "sh" or "bat" commands in a row, put them in a script file, then invoke the script from groovy.

            Good luck. This limit should still be fixed (or raised). You just cannot get to enterprise-worthy pipelines with it.
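            A sketch of that "shift blocks to scripts" idea (the `ci/build.sh` path is hypothetical; the point is one `sh` call per logical block instead of many):

            ```groovy
            node {
                checkout scm
                // Instead of a long run of inline commands like:
                //   sh 'mvn -B clean'
                //   sh 'mvn -B compile'
                //   sh 'mvn -B test'
                // (each of which contributes bytecode to the compiled script method),
                // keep one call and move the details into a shell script
                // versioned next to the Jenkinsfile:
                sh './ci/build.sh'
            }
            ```

            The shell script's length no longer matters to the Groovy compiler, since only the single `sh` call is compiled into the pipeline method.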

            spinus1 Alessio Moscatello added a comment -

            Hello,

            this is causing trouble at my company as well... I think a solution could be moving bash commands to an external file or using groovy libraries, but during the initial development phase of a pipeline I usually put all the code in a single file: it is not maintainable, but it is what I need to quickly develop and test new pipelines.

            I'll be really grateful to the Jenkins developers if they can solve this issue.

            BR,

            Alessio

            spinus1 Alessio Moscatello added a comment -

            Hello,

            I'm just fiddling with groovy shared libraries. It helped me reduce code size a bit, but now I'm pretty stuck since I cannot reduce it any more, so each line I add causes the issue... Maybe I'm not using shared libraries in the correct way? Does anyone have some hints on this?

            BR,

            Alessio
            mueck Carsten Mück added a comment -

            Workaround:
            Move your code into different scripts inside an extra Jenkinsfile repository (or in your build repository), check those files out, load them into variables, and call the code as functions.
            Example
            Jenkinsfile (Main executed)

            node(){
                checkout Jenkinsfile-repo
                def helperScript = load("path/to/helperscript.groovy")
                helperScript.DoYourWork()
            }

            helperscript.groovy

            #!groovy
            def DoYourWork(){
                 //Do something that would make the initial script too large to compile
            }
            //Important statement for loading the script!!!
            return this

            Since the helper script is not loaded at compile time, the main Jenkinsfile can be compiled.

            Hope that works for more people than just me.

            eplodn1 efo plo added a comment -

            mueck If one can't define a pipeline inside `DoYourWork()` function — which I suspect is not the case, though I haven't tried it — this does not solve the original issue.

            mueck Carsten Mück added a comment -

            I am not sure what you mean by defining a pipeline. If it means loading your properties, then you can still do that in the main Jenkinsfile, or is yours so huge that this alone invokes the java.lang.RuntimeException: Method code too large! exception?

            Just to clarify this issue:
            When Jenkins compiles the code to start the build, it fails because the code is too large. Code that is loaded at runtime is not loaded at that moment, meaning you can make the compiled code smaller by moving parts out into a file that you load while building. The compiled code can still have a decent size, i.e. everything you need to "define" your pipeline.
            I know that this works, as I encountered the Method code too large exception and am working around it with my "solution".

            eplodn1 efo plo added a comment -

            mueck Please see the attachment, if you can work it out I will be more than happy.


            jbennett20912 Jeffrey Bennett added a comment -

            Carsten's approach seems to suggest that the behavior of all node() elements can be offloaded to subordinate groovy scripts. Rather than having the developer do it manually, wouldn't it be nice if that's what Jenkins did for you? In other words, Jenkins parses the developer-provided Jenkinsfile and manufactures the script-with-load and the loaded scripts that it then works off of. This would then be a true solution.
            bitwiseman Liam Newman added a comment - - edited

            This PR will address this issue for declarative scripts that do not use "def" variables before the "pipeline {}" block.  

            https://github.com/jenkinsci/pipeline-model-definition-plugin/pull/355

             

            jbennett20912 The PR above does basically what you describe - but only for declarative pipelines that do not use `def`s. Read on for more info.

            mueck (and anyone else interested):

            When Jenkins compiles the code to start the build, it fails because the code is too large. Code that is loaded at runtime is not loaded at that moment, meaning you can make the compiled code smaller by moving parts out into a file that you load while building. The compiled code can still have a decent size, i.e. everything you need to "define" your pipeline.

            The underlying issue is that the Java classfile specification limits methods to 64k of bytecode. Relatedly, it also limits a single class to 64k constant pool items (not size but number of items).

            NOTE: as far as I can see, this is not a question of writing the output to a file; the limitation is on the binary structure of the byte stream. Regardless of whether you load a class from a file on disk or from a byte stream in memory, if you try to create a Java class that violates these limits it will fail.

            By default, the entire Jenkinsfile is run as part of script initialization, i.e. as one method. If you break your pipeline up into multiple methods, things get better (for a while); however, each new method must also not violate the method size limit. Further, eventually your pipeline will hit another limit: constants per class. You can work past this by further dividing your pipeline into classes.

            The problem with dividing into classes is that `def` variables added to the root of the script are not accessible from those other classes. I have not found a solution to that issue, which is why the above PR doesn't address the issue for Declarative pipelines that use `def` variables. It is possible that we could detect which parts of the pipeline refer to `def` variables and keep those in the same class, but it is involved and likely to be error prone.

            If we focus on Declarative only, we could add some way to initialize variables in a directive instead of using `def`s; then Declarative would be free to split functions and classes as needed, while still preserving the behavior people need from `def`s.
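            A minimal scripted-pipeline sketch of the method-splitting workaround described above (stage contents are illustrative):

            ```groovy
            // Each top-level method of the script compiles to its own JVM method,
            // so each gets its own 64KB bytecode budget instead of everything
            // landing in one giant script-initialization method.
            def buildStage() {
                stage('Build') { sh 'make build' }
            }

            def testStage() {
                stage('Test') { sh 'make test' }
            }

            node {
                checkout scm
                buildStage()
                testStage()
                // Eventually the 64k constant-pool-items-per-class limit can still
                // be hit; getting past that requires splitting into separate
                // classes or load()-ed files, with the `def` visibility caveat above.
            }
            ```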

            bitwiseman Liam Newman added a comment -

            If you are using Declarative:

            There is a partial fix for this in v1.4.0. Due to the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):

            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true  
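            As a concrete example, the script-console variant can be a one-liner (effective until the next restart; for a permanent setting, pass the same name as a -D flag on the controller's Java command line):

            ```groovy
            // Jenkins script console (Manage Jenkins > Script Console).
            // Toggles script splitting at runtime; the setting is lost on restart.
            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
            ```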

            As noted, this still works to some extent for pipelines using `def` variables, but not as well. 

             Please give it a try and provide feedback.  


            brianjmurrell Brian J Murrell added a comment -

            I wonder how many people are here because of the lack of proper matrix jobs in Pipeline.

            This problem has to be resolved one way or another that works for all currently accepted forms of Jenkinsfile. As pointed out in a previous comment, there is a limit to how much you can refactor your Jenkinsfile before Declarative Pipelines are just plain unusable.

            What happened to the solution hinted at over a year ago that was then:

            not quite ready to demo or announce something (a few more pieces need to fall into place)

            Are they still waiting to fall into place, over a year later?
            bitwiseman Liam Newman added a comment -

            brianjmurrell

            Unfortunately, that was hopeful thinking on Sam's part and he is no longer working on this. Solving this completely is a huge undertaking. If it were easy, it would be done already.

            As to matrix, have you taken a look at 1.5.0-beta1?

            brianjmurrell Brian J Murrell added a comment - - edited

            bitwiseman But huge undertaking or not, this is a severely limiting (show-stopping, in fact) factor. At some point somebody will have done all of the refactoring into a library that is possible and will still hit this problem. What is the solution/recommendation for that person?

            As for 1.5.0-beta1, no, I have not.  1.5.0-beta1 of what exactly?  Is there a high-level changelog somewhere highlighting what's going to be new/fixed in it?

            pmcnab Pete McNab added a comment -

            This issue is three years old and has never really been addressed with anything other than vague marketing-speak and nothing definitively helpful for people who listened to the screams from Jenkins to migrate to pipelines, only to discover all the limitations.  But you can happily pay Cloudbees for enterprise support and more expensive add-ons which still won't solve your problems.

            Give it up, find another solution.

             

            idanadar Idan Adar added a comment -

            Got hit with this limitation as well. Nuts. What is the workaround here?


            spinus1 Alessio Moscatello added a comment -

            bitwiseman In your comment you say to set a JVM argument.

            I've added it to my Jenkins instance, and from the Jenkins script console I see that the JAVA_OPTS variable is being populated like this:

            JAVA_OPTS=-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            Is this sufficient to verify that script splitting is enabled? Because I haven't seen any differences (my declarative pipeline is still failing).

            Another question: you are talking about version 1.4 of which plugin?

            BR,

            Alessio
            moglimcgrath M McGrath added a comment -

            Really need some direction on how to overcome this issue.

            eplodn1 efo plo added a comment -

            moglimcgrath The good news is that you can combine both types.
            Our project now looks like

            node('node') { stage('stage 1') { ... } }
            node('node') { stage('stage 2') { ... } }
            ...
            pipeline { agent { ... }     stages { ... } }
            ...
            node('node') { stage('stage n-1') { ... } }
            node('node') { stage('stage n') { ... } }
            
            moglimcgrath M McGrath added a comment - - edited

            eplodn1 Hmm, interesting.

            I've already asked this over on https://issues.jenkins-ci.org/browse/JENKINS-56500 so apologies for the duplication.

            Our code is predominantly in /vars: templated pipelines with each stage broken out into global vars, and the Jenkinsfile passes pipeline params. We really liked a lot of what declarative pipelines offer, but the file size issue is a pain.

            At this point I'm starting to break logic out into classes in /src, and considering a move to scripted pipelines. Is a move to scripted the right option? If I break logic out into classes and use them with declarative, will I still hit the size limit?

            eplodn1 efo plo added a comment -

            See the attached pipeline, here or over on #56500. If you're facing the same issue, then classes or no classes, it will still hit the limit.

            moglimcgrath M McGrath added a comment - - edited

            eplodn1 thanks for the replies.

            Yes, I've tested with the attached pipeline and it reproduces the same issue we are seeing with our templated pipeline using shared libs.

            I tried out bitwiseman's fix with the plugin update and the JAVA_OPTS variable set, but no joy.

            We originally didn't really want to move away from declarative, but whatever the best option is, we will make the needed changes.

            When you say "classes or no classes, it will still hit the limit", I assume you are referring to using classes combined with declarative.

            Would a move to scripted pipelines, broken out into shared libs, be an option, or will it be more of the same? I don't want to refactor and find out after.

            bitwiseman Liam Newman added a comment -

            eplodn1
            Um, why would you do that? That will definitely limit the effectiveness of script splitting from JENKINS-56500.

            See the attached pipeline, here or over on #56500. If you're facing the same issue, then classes or no classes, it will still hit the limit.

            I also am unclear on what you mean with this comment. Could you give an example?

            Finally, I'm uploading an example of a pipeline helped by script splitting.


            brianjmurrell Brian J Murrell added a comment -

            bitwiseman I think the point being made, and demonstrated with JenkinsCodeTooLarge.groovy, is that even a pipeline that contains nothing except pipeline structure and calls to library functions can blow the size limit.  At some point, as the above pipeline demonstrates, you have factored out as much as you can and still blow the limit.

            The limit is the problem here, not how much of a pipeline has been factored away into a library.  The latter is just a band-aid, postponing of the inevitable and frankly a wasted investment if you are going to have to end up scrapping the whole thing at some point and moving to an entirely new solution that won't have such inevitable fatal limits.

            Hopefully the newly available matrix feature will help some people out, but there will still be people with big pipelines that are un-matrixable.
            bitwiseman Liam Newman added a comment -

            brianjmurrell

            The latter is just a band-aid, postponing of the inevitable and frankly a wasted investment if you are going to have to end up scrapping the whole thing at some point and moving to an entirely new solution that won't have such inevitable fatal limits.

            I'm not understanding your statement here. There is no way to make there not be some point at which this limit is hit - it is part of the Java class file binary format. You can hit it while writing any Java program. You don't hit it in practice because the structure of Java encourages code practices that make it unlikely.

            The Script_Splitting.groovy example shows that script splitting addresses this issue for Declarative Pipelines that don't use variables (which is best practice). It is effectively the same as JenkinsCodeTooLarge.groovy but without the variable declaration. Is there still a point at which you may hit the size limit? Yes; however, it is over 1000 stages (that's where I stopped), and even higher for matrix-generated stages. At that point hitting the issue isn't "inevitable" but rather highly unlikely.

            How big of a pipeline are you trying to run?

            If what you mean to say is "Well, I use variables so this doesn't help me", I understand your frustration. If you have bandwidth to contribute a solution, I'd love to chat with you about it.


            brianjmurrell Brian J Murrell added a comment -

            I will investigate if/how this helps the next time we hit the limit.
            moglimcgrath M McGrath added a comment -

            Hi bitwiseman,
            I have been able to take your sample pipelines (Script_Splitting.groovy and Script_Splittingx10.groovy) and reproduce the "Method code too large" issue. When I enable SCRIPT_SPLITTING_TRANSFORMATION=true, the two pipelines you provided run successfully.

            However, when I add Script_Splitting.groovy to a shared library under /vars, register the shared library in the Jenkins system configuration, create a mock app with a Jenkinsfile which consumes the pipeline, and set up a Multibranch job, I still reproduce "Method code too large".
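            For context, a shared-library reproduction along the lines described above might look like the following Jenkinsfile. The library and step names here are placeholders invented for illustration, not from the original report:

```groovy
// Hypothetical mock-app Jenkinsfile: the whole pipeline lives in the
// shared library's vars/ directory, and the Jenkinsfile only invokes it.
@Library('my-shared-lib') _      // placeholder name registered in Jenkins config
scriptSplittingPipeline()        // placeholder step wrapping Script_Splitting.groovy
```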

            rille Richard Olsson added a comment - - edited

            Hi,

            My config & setup:

            • Jenkins ver. 2.190.3
            • Declarative pipelines
            • Pipeline jobs with groovy pipeline script of 591 lines and 39 jobs to build are failing with "General error during class generation: Method code too large!" (Files with 396 lines are fine)


            I have Job DSL logic in place that reads configuration files to create "pipeline code" (plus jobs and so on) which is stored in variables in the Job DSL groovy scripts. That code is then used when creating the Jenkins pipeline jobs.

            So, the pipeline code is created "on the fly" by Job DSL groovy scripts.
            The repo contains a pipeline code TEMPLATE file; I read that into the Job DSL groovy code, do some editing/replacing, and store the final
            pipeline code in an internal variable. The pipeline code is only ever stored in groovy variables, not in any file on disk, so it cannot be handled as static files.

            This infrastructure works very well in other pipeline setups with a smaller number of stages, Jenkins jobs, and pipeline code lines. This issue came as a surprise to me when creating this new setup with bigger pipelines. :-|

            What to do?

            I've seen references to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines.

            But I doubt that I can update code in shared-library files from Job DSL groovy code...?  (Compared to what's done today: storing the final pipeline code in a groovy variable.)

            Does anyone have a suggestion on the way forward? Or is the only way to split into more pipeline jobs and pipeline code files?

            I don't want to make any big changes to the Job DSL logic that's already in place and works fine for today's smaller pipeline setups!
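            As a rough illustration of the setup described above (file names and the template token are invented for the example), a Job DSL seed script that templates pipeline code into an in-memory variable might look like:

```groovy
// Hypothetical Job DSL seed-script sketch: read a pipeline template from
// the seed job's workspace, substitute placeholders, and set the result as
// the inline pipeline script of a generated job. The pipeline code only
// ever exists in the 'pipelineScript' variable, never as a file on disk.
def template = readFileFromWorkspace('pipeline.template.groovy')
def pipelineScript = template.replace('@@PROJECT@@', 'my-project')

pipelineJob('generated/my-project-build') {
    definition {
        cps {
            script(pipelineScript)
            sandbox(true)
        }
    }
}
```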

            ironchamp Alan Champion added a comment - - edited

            Just adding weight in the hope that this may be addressed sooner rather than later.

            My objective has been to break down the legacy Jenkins jobs to run various steps as parallel stages for efficiency, with minimal change to the core scripts (which I have parameterised to accommodate either serial or parallel execution).

            • DSL pipeline held in SCM (Git) consists of 632 lines
            • builds five legacy nested Jenkins jobs as "explicitly numbered" Primary Stages:
              1. Prepare Environment called 7 times (8 stages: 1 serial + 5 parallel + 1 serial to merge report)
              2. Generate Tests (based on historical samples) with a conditional alternative stage to accommodate a re-run that required recycling processed data (i.e. 2 stages)
              3. Verify Clean Environment (optional) in parallel with stage 2 (5 stages: 1 serial + 2 parallel + 1 serial to merge report)
              4. Exec Generated Tests (3 stages: 2 parallel same tests on two baselines)
              5. Compare Results (7 stages: 1 serial + 4 parallel + 1 serial to merge report)

            Currently, this amounts to 25 stages including the five overhead stages to handle the parallelisation.  I have evolved to this state gradually, and only after I parallelised the first stage did the "too big" problem appear.  I had also expected to improve performance and visibility further by splitting Primary Stage 4 into 10+ parallel stages.

            I am thinking that the best way forward may involve breaking the jobs into three levels (instead of two) by promoting the five Primary Stages as nested pipelines.

            I accept that this regression testing exercise may not be the norm for most but any advice/help would be appreciated on a pragmatic way forward.

            Thanks, Alan

            gregturner Greg Turner added a comment -

            I've been rewriting a scripted pipeline as declarative to reduce the complexity and improve readability, but have also run into this same issue with what I'd consider "a typical use-case" of Jenkins.  I'm trying to reduce the size but am still up against the limit.

            I understand this is probably not an easy fix but some assurance that this will be fixed in a future release would be helpful.

            sgardell Steven Gardell added a comment - - edited

            This same behavior is seen with scripted pipelines, and it can be worked around - with increasing pain as the functional complexity of a pipeline grows.  Apparently this is due to a core JVM limitation of 64 KB of compiled bytecode per method. Which is unfortunate in a code-generation world. Rather than spending a ton of time working around this, it would be really nice just to make the limit ten times as big...

            It would also be helpful to have a little more insight into the contributors to this. For example, Jenkins scripts, whether declarative or scripted, often have substantial blocks of text directly scripting the node (e.g. bash or whatever).  Does the size of such scripting count directly against the Groovy/Java code size, or is each of these treated as an opaque data blob whose size doesn't really matter? Is there some logging that lets me see the current method size?

            One does have to wonder: is Groovy really the proper vehicle for defining pipelines, then?


            smd Stefan Drissen added a comment -

            I just ran into this migrating from one orchestration multijob plus multiple freestyle jobs to one pipeline (declarative plus matrix) on Jenkins 2.226.  I have multiple stages (build / test / deploy) with matrices inside them (build on x y, test on x y z, deploy on x y). My Jenkinsfile is 587 lines.
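            For reference, a minimal Declarative matrix along the lines described above (axis name and values are placeholders) looks like:

```groovy
// Minimal Declarative matrix sketch: one 'Test' stage fanned out over a
// PLATFORM axis. A real pipeline would add more axes, stages, and
// per-cell agent configuration.
pipeline {
    agent any
    stages {
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'x', 'y', 'z'
                    }
                }
                stages {
                    stage('Run') {
                        steps {
                            echo "Testing on ${PLATFORM}"
                        }
                    }
                }
            }
        }
    }
}
```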
            amuniz Antonio Muñiz added a comment - - edited

            FTR: I'm hitting this with a "not-so-big" (and no-matrix) pipeline, ~800 lines; it includes a few separate stages:

            • Build on Linux (and unit tests)
            • Build on Windows (and unit tests)
            • QA (spotbugs, checkstyle, etc)
            • Security analysis
            • Integration tests
            • Release
            jglick Jesse Glick added a comment -

            amuniz Scripted? (or “Declarative” with script blocks?) It is unknown whether a general fix is feasible for Scripted. Would likely require a redesign of the CPS transformer, which is possible in principle but this is one of the most difficult areas of Jenkins to edit.

            llibicpep Dee Kryvenko added a comment -

            I've said this before in this thread, but as I keep getting notifications about new comments in this issue from people who refuse to admit their pipeline design is flawed, I have prepared this detailed walkthrough.

            Using this technique, I've been able to run 100 stages in both scripted and declarative mode before hitting this issue. I haven't tried the workaround by bitwiseman, which might improve the Declarative case even further. I want to emphasize that if you have even half that many stages, you are doing CI/CD wrong. You need to fix your process; Jenkins just happens to be the first bottleneck you've hit down that path. That discussion can get really philosophical, as we would need to properly define what CI and CD are, what a Pipeline is, and why Jenkins is not a cron with a web interface. I really have no desire to do that here.

            The exception might be matrix jobs, though even then I'm not so sure; I admit there might be a valid use case with that many stages in that space. But even then, execute your scripted pipeline in chunks (details below) and there is no limit at all - I've been able to run a pipeline with 10000 stages! Though then my Jenkins failed to render that many stages in the UI. But more on that later.

            Now, getting into the right way of doing Jenkins.

            First and foremost, your Jenkinsfile, no matter where it is stored, must be small and simple. It doesn't have to say WHAT to do, nor define any stages. All of that is implementation detail that you want to hide from your users.

            An example of such a Jenkinsfile:

            library 'method-code-too-large-demo'
            
            loopStagesScripted(100)
            

            Note that it doesn't matter at this point whether you're going to use scripted or declarative pipelines. Here, you are just collecting user input. In my example I have just one input - a number that defines how many stages I want in my pipeline. In a real-world example it might be any input you need from the user - type of project, platform version, any package/dependency details, etc. Just collect that input in any form and shape you want and pass it to your library. In my example, a demo library lives at https://github.com/llibicpep/method-code-too-large-demo and loopStagesScripted is a step I have defined in it.

            Now, it is up to the library to read the user input, do whatever calculations it needs, generate your pipeline on the fly, and then execute it. The trick is that the pipeline is just a skeleton: it defines the stages but does not actually perform any steps. For the steps it falls back to the library again. The resulting pipeline from that Jenkinsfile will look like this:

            stage('Stage 1') {
                podTemplate(yaml: getPod(1)) {
                    node(POD_LABEL) {
                        doSomethingBasedOnStageNameOrWhatever(1)
                    }
                }
            }
            
            stage('Stage 2') {
                podTemplate(yaml: getPod(2)) {
                    node(POD_LABEL) {
                        doSomethingBasedOnStageNameOrWhatever(2)
                    }
                }
            }
            
            ...
            

            Note that in my example, intentionally to increase the complexity of my pipeline and demonstrate that everything is possible, I am using the Kubernetes plugin, and I fall back to the library for my Pod definition too, calculated from the user input. So my pipeline body doesn't really contain much. Once the library has generated the pipeline string (and you can be as creative as you want in how you go about user input and templating - I gave some examples in this issue previously), it uses the evaluate step to execute it. The actual steps live in the library under doSomethingBasedOnStageNameOrWhatever; both the step name and its input may come from the templating layer to actually do something.

            I want to emphasize that I didn't structure my pipelines this way to work around this particular issue. Proper abstraction layers for stages (interfaces) and steps (implementation) just help me keep my pretty complex CI/CD code in good shape and order. It's readable, easy to understand, and also easily testable (both unit and integration testing).

            Like I said, I've been able to run 100 stages that way before it fails. Even if you really need more, which I doubt, you can execute that pipeline in chunks - for instance, each stage separately. There is no limit if you do it that way; I've run 10000 stages like that and didn't hit the "Method code too large" issue (though I did face other issues, like my Jenkins failing to render that many stages in the web UI). An example Jenkinsfile:

            library 'method-code-too-large-demo'
            
            loopStagesScriptedInChunks(10000)
            

            If you look into the library code, you'll see that all it does is call evaluate for each stage separately. There is a downside to this approach: Jenkins will not know all the stages in your pipeline ahead of time, so in the UI stages will pop up as they get executed.
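            A sketch of what such a chunked step could look like (this is a guess at the shape of the step, not the actual code from the linked method-code-too-large-demo repository):

```groovy
// vars/loopStagesScriptedInChunks.groovy (hypothetical sketch):
// compile and run each stage as its own small script via 'evaluate',
// so no single generated method approaches the 64 KB bytecode limit.
def call(int count) {
    for (int i = 1; i <= count; i++) {
        evaluate("""
            stage('Stage ${i}') {
                echo 'running stage ${i}'
            }
        """)
    }
}
```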

            Now, Declarative pipeline:

            library 'method-code-too-large-demo'
            
            loopStagesDeclarative(250)
            

            It uses the same technique as loopStagesScripted, except that the body of the generated pipeline is Declarative style. It gets executed the same way via evaluate, and results in something like:

            pipeline {
              agent none
              stages {
            
                stage('Stage 1') {
                    agent {
                        kubernetes {
                            yaml getPod(1)
                        }
                    }
                    steps {
                        doSomethingBasedOnStageNameOrWhatever(1)
                    }
                }
            
                stage('Stage 2') {
                    agent {
                        kubernetes {
                            yaml getPod(2)
                        }
                    }
                    steps {
                        doSomethingBasedOnStageNameOrWhatever(2)
                    }
                }
            
            ...
            
              }
            }
            

            I hope whoever really wanted a solution gets it now. And for whoever wants Jenkins to accommodate their failures and maintain an artificial and invalid use case - I'm really sorry for you.

            wb8tyw John Malmberg added a comment -

            Matrix builds are currently not viable if any stage in the Matrix is skipped with a when clause.

            The job execution logic appears to work correctly, but the WEB UI is totally useless for both traditional and Blue Ocean views.

            So we can not use Matrix builds as an alternative to this until https://issues.jenkins-ci.org/browse/JENKINS-62034 is fixed.

            brianjmurrell Brian J Murrell added a comment - - edited

            llibicpep Your explanation and example comment of how the rest of us are all doing CI/CD wrong seems to assume everyone is running very simple and identical stages as your loopStagesDeclarative.groovy example demonstrates.

            I doubt anyone here with this problem has 100 identical, and very simple stages as your example demonstrates. Why don't you try having your example create real stages that have multiple-condition when clauses and post clauses with multiple post sub-clauses in them and see how many stages you can get.

            But moreover, how do you propose your solution solves the problem for people with various different Build, Test and Deployment stages utilising a looping pipeline generator such as you propose?

            So while you can be congratulated on having a 100-stage pipeline, you have to admit that they are not 100 useful and unique stages, are they?

            Can you point to your real-world useful Jenkinsfile and pipeline library where you implement your proposed technique so that we can all see what we are doing wrong?


            Maybe somebody (at jenkins-ci.org) can tell us all here if there is any hope of this ever being fixed or if this is the end of the road for Jenkins for anyone needing anything more than trivial pipelines, and who have already factored out their entire pipelines into libraries such that their Jenkinsfile does nothing more than orchestrate stages to call library functions on agents when conditions are right for that stage to run.

            llibicpep Dee Kryvenko added a comment -

            brianjmurrell I never said stages must be identical or similar for this to work. I run a very complex CICD platform based on Jenkins that supports CI for ~20 platform types (maven, gradle, npm, python, golang, dotnet, php, ruby, docker, chef cookbooks, helm, terraform, etc.) with various CD deployment methods (chef, terraform, helm, ECS, codedeploy, etc.). It allows various combinations of these CI and CD steps and quality gates in between stages (linting, sonar, integration testing, cost analysis and various security scans), and it manages about ~300 applications.

            I can't solve your problems for you. My example on GitHub obviously was not a real-world example; its sole purpose is to demonstrate the concept. I can't just share my proprietary code with you; I put in some effort in my personal free time to put that example together. Yet I am pretty sure it is sufficient for anyone with minimal programming experience to understand what I am talking about. At the end of the day, abstraction, templating, and decomposition aren't exactly new concepts.

            I can't say I'm always happy with Jenkins, and it really feels like a 19th-century tool sometimes, but this particular problem a lot of people are moaning about in this ticket is really easily solved and avoided - if only people would put at least some effort into design instead of ad-hoc scripting whatever comes into their mind first. It gets pretty bad pretty fast if technical debt levels are not managed.

            If you didn't do your due diligence at the time and now the system collapses on you like that, there aren't many people to blame for that.

            llibicpep Dee Kryvenko added a comment -

            Let me give you another hint.

            Stop thinking about CICD in terms of the stages and conditions when to run them. It's not automation. It's mechanization.

            Think about CICD in terms of what do you want to achieve - you want to lint source code, build an artifact, test it, build/update env, deploy the artifact there, test, scan, etc....

            bitwiseman Liam Newman added a comment -

            llibicpep

            brianjmurrell

            I'd appreciate it if both of you would take a minute to stop, calm down, and review the Jenkins Code of Conduct.

            Please treat each other with respect and kindness. We're all trying to make the project better and help each other out.

            Brian,

            I've been meaning to take another swing at improving this. I'll take another look at it this week.


            henryborchers Henry Borchers added a comment -

            bitwiseman, I want to thank you for trying to cool things down here.

            There has certainly been a lot of contention about the importance of this ticket in the comments. It's really frustrating to run into the "Method Code too Large" error, and we can all get a little hotheaded about something like this. I have run into this issue myself, and many times I have had to refactor my pipeline in a way that makes it very hard to read and maintain.

            I really hope you are able to improve this. Even if you aren't able to eliminate or reduce this problem, it would be very helpful to be able to check for it without having to run the pipeline. I use pipeline-model-converter/validate to lint my pipeline, but it won't tell me whether Jenkins can handle my pipeline until it's run.
            jcastillorp Jim Castillo added a comment -

            Agree, we run into this often. While refactoring is doable (which we have done, and continue to do every 2 months or so), it has cost a lot of time and effort to maintain, as well as produced undesirably obfuscated build and deploy code.

            Yes, it works to refactor, but that doesn't seem to be what people are asking for or need. Or at least for us, we would like to see alternatives other than refactoring.

            I appreciate the time and effort that goes into maintaining Open Source and love to support Jenkins and the community, so I want to offer my thanks.

            brianjmurrell Brian J Murrell added a comment -

            Even with all of the bad effects of having to refactor, such as the obfuscation and indirection of having so much code in so many places (a Jenkinsfile, libraries, etc.), refactoring itself has a finite limit to its effectiveness as a solution.

            https://github.com/daos-stack/daos/blob/master/Jenkinsfile is a Jenkinsfile that is on the verge of Method Code too Large (I know, because I am trying to add a new Build stage and am getting that error) and, as you can see, it is merely a framework of a Jenkinsfile that calls out to library functions to do all of its work. I don't know that there is much opportunity for more refactoring in that file. It's already a Jenkinsfile of single-line steps.

            What do you do when you have already factored all of the functionality that you can out of your Jenkinsfile and still hit the error?
            mueck Carsten Mück added a comment -

            Haven't been into this subject for a long time, but back when I had the problem it helped to move methods out into another file and just load that file.
            The Method code too large exception seems to appear only when loading the Jenkinsfile itself, so when you load the other file (after checking it out from an SCM or wherever you have put the extra code) it is free to load that file without the exception.

            So back then my solution was to have the initial Jenkinsfile call a Jenkinsfile.method.groovy after checking that out.

            Hope this helps someone, even though it is not a clean solution, as some errors only appear when the file is loaded (for example, simple compilation errors can be hidden until then).


            brianjmurrell Brian J Murrell added a comment -

            mueck You wouldn't have a more concrete example of your solution that you could point at, would you? Your workaround sounds interesting, but I am not sure I am familiar with the methodology you are describing.
            mueck Carsten Mück added a comment -

            I currently don't have a good example at hand, but this Stack Overflow answer shows it a bit:
            https://stackoverflow.com/a/51780707

            If you load a file and call a method which itself calls the methods you have in your git example, then you have already saved one call that would lead to the Method Code too large exception. And you could also call even more methods from that one method inside your main Jenkinsfile.
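
            The split-file workaround described above can be sketched roughly like this (the file and method names are illustrative, not taken from anyone's actual setup):

            ```groovy
            // Jenkinsfile (Scripted): keep the entry point small. The bulk of the
            // logic lives in a second file that is loaded at runtime, so it is
            // compiled as a separate class and its methods do not count against
            // the 64 KB-per-method JVM bytecode limit of the main script.
            node {
                checkout scm
                // Jenkinsfile.method.groovy must end with "return this" so that
                // its methods are callable on the returned script object.
                def helpers = load 'Jenkinsfile.method.groovy'
                helpers.runAllStages()
            }
            ```

            Here `Jenkinsfile.method.groovy` would define `runAllStages()` (and any further helpers, each compiled into its own method) and finish with `return this`.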

            llibicpep Dee Kryvenko added a comment -

            Brian, indirection and abstraction are not always obfuscation and are not always bad. 1297 lines of code in the Jenkinsfile at your link is not exactly readable and maintainable, and is a sign of lots of duplication. Here are a few suggestions to start with:

            1. Just to address the big elephant in the room - the first ~300 lines of the Jenkinsfile are code, and code doesn't belong in a Jenkinsfile.
            2. Most of the build stages (~500 lines) go against the DRY principle - it is basically the same code with small tweaks per platform. It can be defined in a library. Any "block" in a Jenkinsfile is basically nothing more than a Groovy closure, so it is perfectly fine to do some code generation and return a closure, built from the input parameter[s], as a stage body from the lib step. Your Jenkinsfile might then look like this:
            stages {
             stage('Build RPM on CentOS 7', getBuildStage('centos7'))
             stage('Build RPM on Leap 15', getBuildStage('...'))
             stage('Build on CentOS 7', getBuildStage('...'))
             stage('Build on CentOS 7 Bullseye', getBuildStage('...'))
             stage('Build on CentOS 7 debug', getBuildStage('...'))
             stage('Build on CentOS 7 release', getBuildStage('...'))
             stage('Build on CentOS 7 with Clang', getBuildStage('...'))
             stage('Build on Ubuntu 20.04 with Clang', getBuildStage('...'))
             stage('Build on Leap 15', getBuildStage('...'))
             stage('Build on Leap 15 with Clang', getBuildStage('...'))
             stage('Build on Leap 15 with Intel-C and TARGET_PREFIX', getBuildStage('...'))
            }

            ~500 lines to ~10 lines reduction right there.
            That potentially applies to the test stages as well - I haven't looked closely at whether they also violate DRY or are actually different. Though it is worth mentioning that literally any stage can be stored in a lib, whether it is reusable or not. Having the Jenkinsfile as a nice and clean orchestrator and hiding the implementation somewhere else is almost always a good idea. The entire `pipeline` and its body can be sourced from a lib.
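
            For Scripted pipelines, the closure-returning library step described above could look something like this (a sketch; the `getBuildStage` step name and the shell command are illustrative):

            ```groovy
            // vars/getBuildStage.groovy in a shared library. A Scripted-pipeline
            // stage body is just a Groovy closure, so a library step can build one
            // from its parameters and hand it back to the Jenkinsfile.
            def call(String platform) {
                return {
                    node(platform) {
                        checkout scm
                        sh "./build.sh --platform ${platform}"
                    }
                }
            }
            ```

            The Jenkinsfile then calls `stage('Build on CentOS 7', getBuildStage('centos7'))`, keeping each stage down to one line.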

            3. Are that many stages actually needed? There might be an opportunity to rely more on feature flags vs separate binaries, which makes sense not only from the CI point of view but also helps reduce the amount of time needed to test everything, as well as the overall complexity.
            4. And like I said, most likely you have more than one repository, so having the implementation in the Jenkinsfile leads to lots of code duplication. That is to say, all the suggestions I am making are just common sense and should have been done NOT because of this "Method code too large" error, but because it makes sense to do. When you want to reuse some code, you don't copy-paste it, do you? You publish it as a library and then consume it as a dependency. Jenkinsfiles are no different. It should have been done from the get-go, not in response to a system collapse. You don't code your projects in a single file without architecture and design and start splitting it up as an aftermath to the issues, do you? Why is a Jenkinsfile any different? And I did walk through other repos in your Org to confirm what I'm saying, and I found that you actually perfectly understand what I'm saying here, as all of your other repos already use something like `packageBuildingPipelineDAOS`, so I am not entirely sure what this conversation is all about. Whether an abstraction you came up with feels like an obfuscation or a simplification is totally up to how you implement it.

            brianjmurrell Brian J Murrell added a comment -

            Just to address the big elephant in the room - the first ~300 lines of the Jenkinsfile are code, and code doesn't belong in a Jenkinsfile.

            Perhaps, or perhaps not. But it is orthogonal to the issue at hand, as (as I understand it) everything outside of the pipeline block does NOT contribute to the actual error everyone is trying to work around here. One can debate the usability of having code located where it's used (only once, and it doesn't need to be moved for DRY purposes) vs. having to refer to a completely different project/library (such as a pipeline library). But again, that is not germane to and is completely unrelated to the issue at hand here, so let's not get distracted by such a debate.

            Most of the build stages (~500 lines) go against the DRY principle

            I cannot disagree with you here. But this is what Pipeline forces one to do. In theory, Matrix is supposed to be the way to alleviate this; however, Matrix has a number of aesthetic and (moreover) actual functionality bugs that prevent it from being used.

            While it's clear to me how a whole Jenkinsfile can be put into a library and re-used, such as we do with packageBuildingPipelineDAOS, how we would use the stages block as the only thing in a Jenkinsfile as you do above is very unclear to me. I obviously don't have as deep an understanding (nor do I feel I should actually need to, but that's beside the point) of how Jenkins processes its Jenkinsfile and turns that into Java/Groovy, so maybe you can enlighten me on how that works. What sort of thing is a getBuildStage() function allowed to actually return? You seem to be indicating that it can be much more than simply the functionality of a step, such as a whole stage, which does not contribute to this Method Code too Large error.

            I have never seen any such construct defined or documented anywhere. Even the Jenkinsfile as a function in a library is documented.
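
            For reference, the documented "Jenkinsfile as a function in a library" pattern mentioned above looks roughly like this (a sketch; the step name, library name, and build command are illustrative):

            ```groovy
            // vars/packageBuildingPipeline.groovy in a shared library. The entire
            // Declarative pipeline lives in the library; each consuming
            // repository's Jenkinsfile shrinks to a one-line call.
            def call(Map config = [:]) {
                pipeline {
                    agent any
                    stages {
                        stage('Build') {
                            steps {
                                // Fall back to a default command if none is given.
                                sh config.buildCommand ?: 'make'
                            }
                        }
                    }
                }
            }
            ```

            A consuming Jenkinsfile would then be just `@Library('my-shared-lib') _` followed by `packageBuildingPipeline(buildCommand: 'make rpm')`.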
            llibicpep Dee Kryvenko added a comment - - edited

            Brian, my apologies - I just realized what I suggested above will not work for Declarative pipelines, which is the flavor of pipelines you are using. But let me make a few remarks on that:

            1. As you move towards the separation of abstraction and implementation, which in my opinion is inevitable for any more or less complex pipelines, maybe it is worth revisiting what you need Declarative pipelines for. Think about this: the opinionated Declarative syntax was made for human consumption, but in the scenario with a lib-pipeline-factory, humans don't interact with that syntax anymore - their new interface is statements like `packageBuildingPipelineDAOS`. These new interfaces need to be declarative and readable, and you define them on your own to the best of your liking. The pipeline DSL body itself is nothing more than a middle layer now that gets generated by a library and executed by Jenkins. Its syntax doesn't matter as much anymore. Switching to Scripted pipelines in that scenario opens the doors to much more flexibility, as the Declarative syntax is artificially limited (for the sake of being opinionated). It is worth mentioning that a few features like "Restart from Stage" are currently not available for Scripted pipelines, but since the pipelines are programmatically generated by a library now, it would be extremely easy to just accept an input variable indicating which stage to restart from and generate a pipeline starting from only that stage.
            2. For Jenkins maintainers - allowing the syntax I suggested above for Declarative pipelines might be a partial solution (or remediation at the very least) to this issue. From a technical standpoint, I imagine this limitation is artificial; at the end of the day, any Jenkinsfile, scripted or declarative, is a superset of Groovy, and a `{...}` expression is always a closure. Allowing library steps to return closure instances in a Declarative pipeline (which can still be validated for declarative syntax), and allowing them to be used as bodies for `stage`, `when`, `agent`, etc. blocks, sounds like a good idea to me. In fact, now that I'm thinking about this, this is probably the only major obstacle for people like myself to get rid of scripted pipelines altogether. If I can programmatically generate a Declarative pipeline in a library, I can get the best of both worlds.
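
            The "restart from a given stage" idea for library-generated Scripted pipelines mentioned in the first point could be sketched like this (the `START_STAGE` parameter and the stage names are illustrative):

            ```groovy
            // Scripted pipeline generated by a library: skip every stage before
            // the one the user asked to restart from, then run the rest in order.
            def stageNames = ['Lint', 'Build', 'Test', 'Deploy']
            def startAt = params.START_STAGE ?: stageNames.first()
            boolean active = false
            for (name in stageNames) {
                if (name == startAt) {
                    active = true
                }
                if (active) {
                    stage(name) {
                        echo "Running ${name}"
                    }
                }
            }
            ```

            Since each generated stage is its own small closure, this also keeps any single compiled method well under the 64 KB bytecode limit.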
            jglick Jesse Glick added a comment -

            Taking a break from discussion of impact and workarounds, some thoughts on the implementation side.

            Ultimately this is a limitation of the JVM. You can see something similar without Jenkins, albeit artificially, by just making a Groovy source consisting of, say,

            println(13)
            

            repeated a few thousand times, and trying to run it:

            org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
            General error during class generation: Class file too large!
            
            java.lang.RuntimeException: Class file too large!
            	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
            	at org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:827)
            	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
            	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
            	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
            	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
            	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
            	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
            	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
            	at groovy.lang.GroovyShell.run(GroovyShell.java:517)
            	at groovy.lang.GroovyShell.run(GroovyShell.java:507)
            	at groovy.ui.GroovyMain.processOnce(GroovyMain.java:653)
            	at groovy.ui.GroovyMain.run(GroovyMain.java:384)
            	at groovy.ui.GroovyMain.process(GroovyMain.java:370)
            	at groovy.ui.GroovyMain.processArgs(GroovyMain.java:129)
            	at groovy.ui.GroovyMain.main(GroovyMain.java:109)
            	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            	at java.lang.reflect.Method.invoke(Method.java:498)
            	at org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:109)
            	at org.codehaus.groovy.tools.GroovyStarter.main(GroovyStarter.java:131)
            
            1 error
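            The same reproduction can be driven from a few lines of plain Groovy, with no Jenkins involved (a sketch; the exact repetition count needed to trip the limit varies with the Groovy version):

            ```groovy
            // Build a script body consisting of the same statement repeated many
            // thousands of times, then ask a stock GroovyShell to compile it. With
            // enough repetitions this is expected to throw
            // MultipleCompilationErrorsException ("Method code too large!" or
            // "Class file too large!"), matching the stack trace above.
            def src = 'println(13)\n' * 10000
            try {
                new GroovyShell().parse(src)
                println 'compiled OK - try a larger repetition count'
            } catch (org.codehaus.groovy.control.MultipleCompilationErrorsException e) {
                println e.message
            }
            ```
            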
            

            The reason this is particularly noxious for Pipeline script is that the CPS and sandbox transforms result in considerable code bloat, so a method which might have been compiled by a stock Groovy runtime into a few Kb winds up going over the limit. And I suppose the problem is particularly noticeable for Declarative because of the big single implicit Script.run method (i.e., the main body), which the Declarative plugin (pipeline-model-definition) works around in some cases but cannot deal with when you are effectively mixing bits of Scripted into a mostly Declarative structure, as people often do since there is nothing forbidding it (alas).

            The CPS transformer could try to detect methods which are going to be excessively big (somehow?), and then internally rewrite them to use subroutines, shifted to other classes where necessary. It just seems like it could get very complicated to figure out when it is safe to do this and how to do it. If a method body in general contains local variables in various scopes and various sorts of control structures, breaking pieces of it off while preserving semantics is a challenging compiler (de-)optimization. You could probably do a somewhat simpler trick, activated only for big methods, which runs every single instruction as a separate method. The result would definitely be slower to load and run but it might work. Either way, your stack traces are going to look very confusing unless you do further work to hide synthetic stack frames. Offhand I would expect this sort of thing to be on the scale of a Google Summer of Code project, for someone with a deep computer science background, and it would be quite risky (large risk of regression).

            Going forward, I would think this level of effort would be better spent in making jenkinsfile-runner able to run stock Groovy—it already turns off the sandbox transformer, but turning off the CPS transformer would require a bunch of work in workflow-cps—and/or creating a new FlowDefinition which runs stock Groovy in a separate process while flipping control flow back and forth with the controller (a.k.a. external Pipeline execution). There are numerous other problems with the CPS transformation and it does not seem prudent to make massive changes to that code, which was written by Kohsuke before he moved on and which only a handful of people in the world begin to understand.

            brianjmurrell Brian J Murrell added a comment - - edited

            A big part of the non-technical frustration that this issue causes for me is that this very severe and show-stopping scaling limitation is not documented anywhere (that I came across at least). It's only once one has spent a huge amount of time developing a pipeline that one "runs into" this issue and is first introduced to it.  One then spends at least as much time iteratively trying to refactor one's Jenkinsfile to try to stay under the limit.  But one does this knowing there is still going to be a limit to how much refactoring can be done and that one day, one is going to have refactored as much as one can and still not be able to add any more stages to their Jenkinsfile.

            This scaling limitation needs to be very clearly and prominently documented right at the start of the Jenkins Pipeline documentation.  It's the first thing that somebody diving into Declarative Pipelines should know.  They should know before they even start that Declarative Pipeline has a scale limit and that unless their workflow is small and limited, that one day they will no longer be able to add anything to their pipeline and that on that day, it's the end of life for their Pipeline.

            Heck.  This scaling limitation is not even mentioned in the Scaling Pipelines document.

            kgiloo kgiloo added a comment -

            brianjmurrell: perhaps the most advisable comment I've ever seen concerning this issue...

             

            jglick Jesse Glick added a comment -

            CpsGroovyShell.reparse could certainly detect this exception message and send you to a jenkins.io redirect link taking you to a page with a description of the causes and suggested workarounds.


            henryborchers Henry Borchers added a comment -

            jglick, Is there any way that something could be done to see how close our pipelines are to the limit? It is very hard to tell, when refactoring my pipelines, which refactors will provide a large enough impact.

            From my own experience, creating new stages, adding post-stages, and options seems to have a much larger effect on getting closer to the limit than adding more steps. However, I wish I could measure it.
            jglick Jesse Glick added a comment -

            Is there any way that something could be done to see how close our pipelines are to the limit?

            Other than trying to run the script? Not that I am aware. A complex series of transformations happens between source code and byte code.


            brianjmurrell Brian J Murrell added a comment -

            CpsGroovyShell.reparse could certainly detect this exception message and send you to a jenkins.io redirect link taking you to a page with a description of the causes and suggested workarounds.

            If that's in response to my gripe about this scaling limitation being undocumented, then at that point it is way too late.  Finding this ticket and others (including a CloudBees KB article) from the error message was not terribly difficult.

            My gripe specifically is my investment into Pipeline (without knowing this limitation) only to have hit this wall and now have to pivot and do something completely different: going back to upstream/downstream freestyle jobs, another CI solution, etc. I frankly have no idea what my path forward is here, but I am too frequently hitting this wall and having to refactor my way out of it. All of the low-hanging fruit there is gone now.  My ability to continue to refactor is coming to an end, and I think quite soon.  I'm down to factoring multi-condition when clauses into external functions.  Just about everything in my Jenkinsfile is a single call to an external function.

            Does Matrix solve any of this, or is Matrix just a high-level construct that compiles down into the same amount of bytecode as writing out a series of parallel stages?
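            The "multi-condition when clauses into external functions" refactoring mentioned above looks roughly like this (a hedged sketch; `shouldDeploy` is an invented name, and the conditions are illustrative):

            ```groovy
            // Before: the whole expression lives inside the pipeline block and
            // contributes to the size of the implicit run() method:
            //   when { expression { env.BRANCH_NAME == 'main' && params.DEPLOY && !params.DRY_RUN } }

            // After: the logic moves into a function defined outside the pipeline
            // block, leaving only a single call inside it.
            def shouldDeploy() {
                return env.BRANCH_NAME == 'main' && params.DEPLOY && !params.DRY_RUN
            }

            pipeline {
                agent any
                stages {
                    stage('Deploy') {
                        when { expression { shouldDeploy() } }
                        steps { echo 'deploying' }
                    }
                }
            }
            ```
            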

            henryborchers Henry Borchers added a comment -

            Other than trying to run the script? Not that I am aware. A complex series of transformations happens between source code and byte code.

            Bummer...

            How hard would it be to add something similar to the "pipeline-model-converter/validate" route from the REST API which checks a pipeline to see if it is too large to run?  I get frustrated that I have to commit changes and wait for Jenkins to pick up the job before I know if my changes are within the limit.

            stephentunney Stephen Tunney added a comment -

            I think everyone on these boards is missing the point here...

            Stop using Jenkinsfile.  It's not sufficient for corporate CI, never has been.  Just have it call out to a single shell script that takes care of everything else.  Stop spinning your wheels.

            The Jenkins/Hudson folks clearly don't care about larger users.  Move on to a newer CI/CD platform that requires less maintenance.  Who wants another pet to take care of?
            henryborchers Henry Borchers added a comment - - edited

            Stop using Jenkinsfile

            No!

            it's not sufficient for corporate CI, never has been

            It could be. It does 90% of everything I could want to do, and it keeps getting better every day.

            Just have it call out to a single shell script that takes care of everything else.

            The flow control of the Jenkinsfile is very useful for parallelizing tasks without sacrificing human readability. A simple shell script doesn't do that.

            Stop spinning your wheels.

            I will spin my wheels all I like! Thank you very much.
            bitwiseman Liam Newman added a comment - - edited

            brianjmurrell moglimcgrath amuniz henryborchers gregturner spinus1 smd sgardell

            Please take a look at https://github.com/jenkinsci/pipeline-model-definition-plugin/pull/405.

            If any of you can try this change out to see if it fixes your issues, that would be great.

            This is an experimental change, so please do not install it on production servers. See the warning in the PR.

            To test this update:
            You must still set: org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            Install the following incrementals:
            (Was 1.7.3-rc1873.537be530946d but updated)
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-api/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-definition/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-extensions/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-stage-tags-metadata/1.7.3-rc1872.9504c794d213/

            (Sorry, yes, you need to install all four of them.)


            henryborchers Henry Borchers added a comment -

            bitwiseman, This looks exciting. Thank you for putting effort into this.

            I'll try to see if I can get this working in a Docker container, but to tell you the truth I'm a little anxious because I'm not exactly sure how.
            bitwiseman Liam Newman added a comment -

            henryborchers
            How to get Jenkins working in a Docker container?
            https://batmat.net/2018/09/07/how-to-run-and-upgrade-jenkins-using-the-official-docker-image/ - use "jenkins/jenkins:lts" instead of a specific version.

            Install the HPI files from each of the above links: https://www.jenkins.io/doc/book/managing/plugins/#from-the-web-ui-2 - You can do all four of them and then restart.

            In the script console at "Manage Jenkins -> Script Console", paste and run this:
            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            If you restart your Jenkins instance, you'll need to re-run the script console setting.

            Then try out your pipeline.


            henryborchers Henry Borchers added a comment -

            bitwiseman

            I used jenkins/jenkinsfile-runner as the base Docker image, added the HPI files from your links to /usr/share/jenkins/ref/plugins/, and installed the rest of the required plugins using jenkins-plugin-manager.  I ran docker with -e JAVA_OPTS="-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"

            To be fair, I didn't actually get my Jenkinsfile pipeline running. I'm still learning how to use the jenkinsfile-runner, but instead of the "Code too Large" errors, I got errors that I didn't have any agents with the correct labels. At least I know that Jenkins was able to load my Jenkinsfile without crapping out. I still need to configure my dockerfile agent with other docker agents.

            henryborchers Henry Borchers added a comment -

            bitwiseman This really does fix it.

            The only way I can easily test this right now is with the jenkins/jenkinsfile-runner Docker image. However, I can tell that just by swapping the current versions of the plugins with the ones in your PR and setting SCRIPT_SPLITTING_TRANSFORMATION, the Jenkinsfile pipeline that was too large was able to work.
            bitwiseman Liam Newman added a comment -

            henryborchers
            Excellent!
            I'm hoping for feedback from more folks such as brianjmurrell before I release this.

            jglick Jesse Glick added a comment -

            To be clear, the proposed fix applies only to Declarative Pipeline.


            brianjmurrell Brian J Murrell added a comment -

            I don't have a pipeline exhibiting this problem any more, since my last occurrence and the refactoring[1] I did to resolve it.  That may not last for long though, as new stages are always being added.  I can't say how soon that will be.

            Ultimately, does this further enhancement of SCRIPT_SPLITTING_TRANSFORMATION still result in a wall where the Jenkinsfile can once again be too big, or does this new mechanism split as much as is necessary to accommodate any size of Jenkinsfile?

            Could the change here make things any worse?  If not, going forward with it is a wash at worst, yes?

            [1] This time it was moving multi-condition when clauses into functions to simplify the when blocks, causing more unnecessary indirection, IMHO.  Reading my Jenkinsfile is now an exercise in jumping all around the file (to see the value of functions used solely to reduce the pipeline block size, not to implement any DRY) and back and forth between repos (pipeline libraries), etc., which is very annoying.
            bitwiseman Liam Newman added a comment - - edited

            brianjmurrell
            There will always be a wall. The limitations on class size are hard-coded into the Java class file format.
            However, this improvement moves the wall exponentially further out, similar to going from a 16-bit integer to a 32-bit integer. It is a massive improvement.

            Even if you are not encountering the issue currently, it would be helpful if you tried this new version to make sure it didn't break anything. Further, you could try reverting the last change that you made to your Jenkinsfile to mitigate this and see if it still works. The only change you might need to make is adding "@Field" to script-local variable declarations (def varName = "value" in the root of the script).

            I don't have anything to revert.  I'd never commit a Jenkinsfile that doesn't run in Jenkins.  I wouldn't have the approvals to land such a patch.

            So the last time I ran into this, was when I added a stage or two but in the same commit I also refactored to allow the new stage(s) to fit.

            I'm also not sure when my priorities at my day job will allow me time to stand up a non-production Jenkins server to try this out in.  When I do find the time, I will be sure to update here.

            brianjmurrell Brian J Murrell added a comment -

            I thought I'd add that I tested these changes with my skeleton script that reproduced the error for us and it seems to be working. I also can't make these changes to our main Jenkins instance, but I used my docker setup that I have for reproducing errors.

            Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).

            I installed the plugins and activated SCRIPT_SPLITTING_TRANSFORMATION, and now I've been able to run the same script with 60 stages without hitting the error. I might be able to go higher, but our use case is far from hitting that many stages.

            I do want to say thanks for keeping this issue active. We've been running a workaround script for a while now but I've been keeping my eye on progress on this issue, and it looks promising so far. I'm anxious to get back to a pure declarative implementation.

            mbrunton27 Matthew Brunton added a comment -

            Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).

            My suspicion here is that the complexity of the when conditions adds to the amount of bytecode generated, contributing to the Method code too large situation. I moved all of my multi-condition tests into functions so that each of my when conditions is a single call to the function wrapping its actual multi-condition test.
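The refactoring described above can be sketched roughly like this (the function name and conditions are hypothetical, not from the original pipeline):

```groovy
// Hypothetical sketch: the multi-condition test lives in a plain function,
// so each when block contributes only a single call to the generated bytecode.
boolean shouldRunGoStages() {
    // Illustrative conditions only.
    return env.BRANCH_NAME == 'main' || params.FORCE_GO_STAGES
}

pipeline {
    agent none
    stages {
        stage('Go vet') {
            // The when block is now a one-liner regardless of how complex
            // the underlying condition is.
            when { expression { shouldRunGoStages() } }
            steps { echo 'running go vet' }
        }
    }
}
```

The trade-off, as noted above, is indirection: the reader has to jump to the function body to see what the condition actually is.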

            I'm anxious to get back to a pure declarative implementation.

            Indeed. Without unnecessary indirection through functions that have no DRY purpose whatsoever and exist solely to reduce the size of the Method code.

            brianjmurrell Brian J Murrell added a comment -
            doman18 Doman Panda added a comment - - edited

            I have a couple of questions about workarounds:

            1. I saw that many recommend using shared libraries. How is it different from using functions from the same file but outside of the pipeline{} section? 
            2. Some also suggested to me that separating functions in the Jenkinsfile works only if you wrap the pipeline{} section with a call() function like this - call(){pipeline{...}}. Is it true?
            3. Is it me, or does using matrix{} greatly raise the risk of getting such an error? I mean, it seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?
            4. Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?
            5. Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning it and introducing it in projects? 

            I'm asking about these because I really hesitate to use the shared library solution. Most of my functions are not universal and don't make sense for any other projects. Also, I use multibranch jobs a lot and can't imagine how static libs can work with dynamic branches when the build process is strictly correlated with the development process (the Jenkinsfile changes with code development) and thus can't be separated. A change in code would have to be reflected in the shared library as well. For example, when developers add a new compilation target, a new matrix axis is added to the Jenkinsfile. And sometimes a new section. How would this work in a multibranch environment with the shared library solution, where some branches work with the new Jenkinsfile and some still have to be built the old way?
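For reference, question 2 may be referring to the common shared-library pattern of wrapping the whole pipeline in a global variable's call() method; a hypothetical sketch (the file and step names are illustrative):

```groovy
// vars/myPipeline.groovy in a shared library (hypothetical name).
// The Jenkinsfile then reduces to:
//   @Library('my-lib') _
//   myPipeline()
def call() {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { echo 'build' }
            }
        }
    }
}
```

Note that per the maintainers' note in the description, the SCRIPT_SPLITTING_TRANSFORMATION workaround generally does NOT work when the pipeline directive is inside a shared library method like this.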

            bitwiseman Liam Newman added a comment -

            1. I saw that many recommend using shared libraries. How is it different from using functions from the same file but outside of the pipeline{} section?

            The underlying code is completely different. For example, functions in the same file are internally part of the class for that script, whereas shared library functions are in their own classes.

            2. Some also suggested to me that separating functions in the Jenkinsfile works only if you wrap the pipeline{} section with a call() function like this - call(){pipeline{...}}. Is it true?

            I have no idea what syntax you are referring to. Do you mean putting the pipeline in a shared library?

            3. Is it me, or does using matrix{} greatly raise the risk of getting such an error? I mean, it seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?

            No, matrix doesn't cause this, it only makes it easier to run into it. If you created the same pipeline manually as what is generated using matrix, you'd get the same issue. But you would also have a much longer and more repetitive Jenkinsfile.

            4. Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?

            Those things do not cause this problem, but their presence can make it harder for the declarative engine to mitigate this problem.

            5. Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning it and introducing it in projects?

            This is false. Scripted pipeline syntax can also encounter this issue, but it is less common because there isn't an extra layer like there is in Declarative. However, when scripted pipelines do encounter this problem, it is purely up to the writers of that script to work around it. In Declarative, I have been able to process the pipeline code to transparently work around the issue in many cases (with SCRIPT_SPLITTING_TRANSFORMATION).
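As an illustration of the matrix equivalence mentioned in answer 3, a sketch of a two-value axis (stage and axis names are illustrative):

```groovy
// Hedged sketch: this matrix generates the same bytecode footprint as
// writing a 'Run' stage per platform by hand, just from a shorter Jenkinsfile.
pipeline {
    agent any
    stages {
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                stages {
                    stage('Run') {
                        steps { echo "testing on ${PLATFORM}" }
                    }
                }
            }
        }
    }
}
```

Hand-writing the two expanded stages with duplicated steps hits the same class-size limit, only with a longer and more repetitive Jenkinsfile.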

            jenkinsneveragain Paweł added a comment - - edited

            Greetings,

            Getting the error from the sheer number of "when" blocks in the pipeline.
            Test pipeline with 35 booleanParam and 35 stages with when { expression { return params.Foo } }.
            I tested Jenkins 2.235.5 and plugins in version 1.7.1.
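One param/stage pair of the reproducer described above would look roughly like this, repeated ~35 times with different names (names here are illustrative):

```groovy
pipeline {
    agent any
    parameters {
        // Repeated ~35 times with different names in the reproducer.
        booleanParam(name: 'RUN_FOO', defaultValue: true, description: 'illustrative parameter')
    }
    stages {
        // Repeated ~35 times; each when block adds to the generated bytecode.
        stage('Foo') {
            when { expression { return params.RUN_FOO } }
            steps { echo 'foo' }
        }
    }
}
```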

            I installed
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-api/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-definition/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-extensions/1.7.3-rc1872.9504c794d213/
            https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-stage-tags-metadata/1.7.3-rc1872.9504c794d213

            then run
            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
            and getting the new error:

            org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
            General error during semantic analysis: SCRIPT_SPLITTING_TRANSFORMATION is incompatible with local variable declarations. Add the the '@Field' annotation to local variable declarations: org.codehaus.groovy.ast.expr.DeclarationExpression@26fafdbf[org.codehaus.groovy.ast.expr.VariableExpression@49600128[variable: failedStages]("=" at 1:1:  "=" )org.codehaus.groovy.ast.expr.ListExpression@5b30fe0e[]].
            

            errorIncomaptiblewithlocalvar.txt

            sodul Stephane Odul added a comment -

            We use the declarative pipeline and our main CI pipeline is close to 800 lines with 30 parallel stages all with when clauses.

            Since we use Kubernetes, each stage spins up its own pod, and we have a shared Jenkins library to simplify the pod definitions as well as running the individual steps.

            The SCRIPT_SPLITTING_TRANSFORMATION  flag did nothing noticeable.

            As a workaround we tried to group our parallel stages to share the when statements, but nesting parallel stages inside parallel stages is not allowed.

            Short of creating a sub-pipeline per parallel group, I'm not really seeing a way out of this problem. This is annoying since it will probably add a couple of minutes to our pipelines and we'll have to track and copy test result files between pipelines.

            This seems to be a very big design flaw of declarative pipelines, where the JVM limitations impact the ability to use a DSL.

            In the absolute short term we will stop creating more parallel stages which will slow down the productivity of our engineering organization.

            Considering this bug is several years old and seems to impact a lot of organizations, it would be good if the documentation could inform about this problem and warn about what the limits are when using declarative pipelines.

            bishoy_maurice Bishoy added a comment -

            That's horrible, should we expect fixing this soon?

            bitwiseman Liam Newman added a comment - - edited
            Note from the Maintainers

            Please upgrade to v1.8.3 or greater and try the feature flag in the description before commenting on this issue.

            jenkinsneveragain
            Did you try what the error suggested? It is pretty specific.

            sodul
            I'm surprised script splitting had no effect.
            Your pipeline still in your Jenkinsfile, right?
            And pipeline is the only thing declared in your Jenkinsfile?
            Could you try this again with the latest release?

            sodul Stephane Odul added a comment - - edited

            bitwiseman I missed the version requirement. We have:

            • pipeline-build-step:2.13
            • pipeline-github-lib:1.0
            • pipeline-graph-analysis:1.10
            • pipeline-input-step:2.12
            • pipeline-milestone-step:1.3.1
            • pipeline-model-api:1.7.2
            • pipeline-model-definition:1.7.2
            • pipeline-model-extensions:1.7.2
            • pipeline-rest-api:2.18
            • pipeline-stage-step:2.5
            • pipeline-stage-tags-metadata:1.7.2
            • pipeline-stage-view:2.18
            • pipeline-utility-steps:2.6.1

            We are on Jenkins 2.263.3 LTS and we are encountering another issue that prevents any job from starting when we update certain plugins. So I'm a little worried about upgrading until JENKINS-64727 is addressed.

            I will try to upgrade over the weekend to minimize potential outages for our internal developers, since the bug is random and not reliably reproducible on other instances.

            As far as the pipeline is concerned we start it with this:

            Map pr_focus = [:]
            String prepare_uuid = UUID.randomUUID().toString().take(8)
            pipeline {
              agent none
              stages {
                stage ('Prepare') {
                    agent {
                        kubernetes {
                            label "prepare-ci-${prepare_uuid}"

            This uuid is for the kubernetes plugin later on, since our agent definitions need to have a guaranteed unique id. We can probably get a uuid from our library though.

            The Map is a list of stage groups that we enable/disable based on which files have changed. This allows us to skip stages for our PRs if the tests would not be relevant based on the diff. For example, if only Python code has changed, we don't need to run Golang unit tests.

                        steps {
                            script {
                                prepare()
                                sh "jenkins/pr_changes.sh"
                                container('python') {
                                    sh "jenkins/pr_focus.py > pr_focus.txt"
                                }
                                pr_focus = readProperties(file: 'pr_focus.txt')
                                echo "pr_focus: ${pr_focus}"
                            }
                        }
            

            Then later:

                            stage('Go vet') {
                                when {
                                    not { equals expected: '1', actual: pr_focus.SKIP_GO_STAGES }
                                    beforeAgent true
                                }
            

            I think we had to declare the map at the top level to ensure the values would be available to all stages, but if you have a recommendation on another approach, we are open to trying that.

            bitwiseman Liam Newman added a comment -

            Ah, I see. The reason script splitting didn't work was because it silently disabled itself when it saw any other expressions in the Jenkinsfile outside of pipeline.

            The new version v1.8.2 allows other expressions, but not bare variable declarations, and will throw an informative error rather than silently continue running with script splitting disabled. In v1.8.2 with script splitting enabled, variable declarations such as Map pr_focus = [:] and String prepare_uuid = UUID.randomUUID().toString().take(8) need to have the @Field annotation added to them.

            So, your Jenkinsfile would look like:

            @Field
            Map pr_focus = [:]
            
            @Field
            String prepare_uuid = UUID.randomUUID().toString().take(8)
            
            pipeline { ... }
            

            bitwiseman, sorry for bothering you.

            I currently have version 1.8.2; does that mean that the SCRIPT_SPLITTING_TRANSFORMATION flag is enabled by default?

            Regarding https://github.com/jenkinsci/pipeline-model-definition-plugin/releases/tag/pipeline-model-definition-1.8.0

            experimental feature that could be activated by setting SCRIPT_SPLITTING_TRANSFORMATION=true

            So I suspect it should be disabled by default?

            Currently I'm able to use variables declared outside of the `pipeline` block in all stages,

            except the ones that are in the `matrix` definition (for those I used `@Field`), which is weird. Is this expected behavior?

            Any recommendation for defining global variables (strings, maps) for Declarative pipelines (in case some var should be used by several stages)?

            moskovych Oleh Moskovych added a comment -
            jenkinsneveragain Paweł added a comment - - edited

             

            bitwiseman

            Paweł
            Did you try what the error suggested? It is pretty specific.  

            No, I was not sure, and I was testing it in the evening on production, so I moved to another workaround quickly:
            https://code-held.com/2020/01/22/jenkins-local-shared-library/
            I tested it locally, and then when implementing it on prod I noticed a method displaying the Jenkins build status ("build abc is OK").
            I've removed it and replaced it with Jenkins built-in things, and the testing team is not complaining to me about the missing "status OK" method so far.

            def failedStages = []  // <-- I removed it

            pipeline {
                agent none

                failedStages.add(env.FAILURE_STAGE)

                // removed
                stage('Results') {
                    steps {
                        script {
                            if (failedStages.isEmpty()) {
                                echo("${env.JOB_NAME} - OK")
                            } else {
                                echo(abc.getMessage(failedStages))
                            }
                        }
                    }
                }
                mattermostNotify(currentBuild.result, abc.getMessage(failedStages), 'ABC')

            replaced by

                            mattermostNotify("${currentBuild.currentResult}", "Build failed at stage: ${env.FAILURE_STAGE}\nReason: ${env.FAILURE_REASON}", 'ABC')
            

             

             

            bitwiseman Liam Newman added a comment -

            moskovych
            Yes, it is disabled by default.

            jenkinsneveragain
            I'm not sure I understand what you're doing there, but it seems unrelated to this issue.
            The error said: "Add the '@Field' annotation to local variable declarations". Is there some other way this could be said that would be clearer?


            bitwiseman, ok, so can you explain this please:

            I'm able to use variables declared outside of the `pipeline` block in all stages,

            except the ones that are in the `matrix` definition (for those I used `@Field`).

            Does matrix have different logic?

            And again: any recommendation for defining global variables (strings, maps) for Declarative pipelines (in case some var should be used by several stages)? Documentation?

            sodul Stephane Odul added a comment -

            bitwiseman After adding @Field we got:

            00:00:04.555  WorkflowScript: 42: unable to resolve class Field ,  unable to find class for annotation
            

            With the following plugins:

            - pipeline-build-step:2.13
            - pipeline-github-lib:1.0
            - pipeline-graph-analysis:1.10
            - pipeline-input-step:2.12
            - pipeline-milestone-step:1.3.2
            - pipeline-model-api:1.8.3
            - pipeline-model-definition:1.8.3
            - pipeline-model-extensions:1.8.3
            - pipeline-rest-api:2.19
            - pipeline-stage-step:2.5
            - pipeline-stage-tags-metadata:1.8.3
            - pipeline-stage-view:2.19
            - workflow-aggregator:2.6
            - workflow-api:2.40
            - workflow-basic-steps:2.22
            - workflow-cps:2.87
            - workflow-cps-global-lib:2.17
            - workflow-durable-task-step:2.36
            - workflow-job:2.40
            - workflow-multibranch:2.22
            - workflow-scm-step:2.11
            - workflow-step-api:2.23
            - workflow-support:3.7
            

            Am I missing something? Do you have a full example of a declarative pipeline that uses the `@Field` annotation?

            moskovych Oleh Moskovych added a comment - - edited

            sodul, in my case I needed to add one `import` at the top of the file to be able to use it:

            import groovy.transform.Field
            

            and then define this annotation:

            @Field Map dockerParameters = [...]
            
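
            For completeness, the two snippets above combine into a minimal Declarative Jenkinsfile. This is an editor's sketch, not from this thread: the stage name, variable contents, and registry value are made up, and it only runs on a Jenkins controller with the Declarative Pipeline plugin (it is not standalone Groovy):

            ```groovy
            import groovy.transform.Field

            // Hypothetical minimal example: a @Field-annotated variable declared
            // before the pipeline block, so the script-splitting transformation
            // can hoist it out of the size-limited generated method.
            @Field Map dockerParameters = [registry: 'docker.example.com']

            pipeline {
                agent any
                stages {
                    stage('Show') {
                        steps {
                            echo "registry: ${dockerParameters.registry}"
                        }
                    }
                }
            }
            ```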
            sodul Stephane Odul added a comment -

            Thanks moskovych it worked perfectly!

            bitwiseman to answer your question about how to make the error message clearer: I recommend putting an explicitly spelled-out example of a pipeline with the @Field annotation and the required import in the documentation, as many of us are not Groovy experts. The error message should contain a short link to that documentation so we can clearly see how to implement the workaround.

            sodul Stephane Odul added a comment -

            bitwiseman we ran into a bit of an issue, which was a facepalm for me in hindsight. Adding the @Field annotation worked well, but now the other branches (we have hundreds of branches) that do not have the new annotation are failing.

            I was thinking that the new flag could behave in a backward-compatible mode. Instead of flat out failing when the @Field annotation is missing, you could log a warning and fall back to the existing behavior. That way all Jenkinsfiles that were not previously failing would keep working.

            bitwiseman Liam Newman added a comment -

            moskovych
            You'll need to provide an example.

            sodul
            Thanks for the feedback. In the final version, I'll definitely do that.
            You can set "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true". Your fixed pipeline that uses "@Field" will still use the newer/better script splitting, and the other pipelines will start working again. FYI, I know this is annoying, but it had to be done this way. People were complaining that "script splitting isn't working" without taking the time to read that it doesn't work with locally declared variables. This way, anyone not using locally declared variables (which are not recommended anyway) gets the best possible behavior, and anyone who is using them gets clear feedback about their choices. That feedback needs improvement, but it is better than silently not doing what the user has asked for, which is why this flag is provided.

            sodul Stephane Odul added a comment -

            bitwiseman
            Some of our pipelines, on another Jenkins instance, call other pipelines. Since we need to pass parameters along, we have variables such as this before the `pipeline {}` section.

            @Field List parameters = [
                gitParameter(name: 'BRANCH', value: params.BRANCH),
                booleanParam(name: 'SKIP', defaultValue: false)
            ]

            We then have several stages that get the parameters passed around.

                                when { expression { params.SKIP == false } }
                                steps {
                                    build job: 'other', propagate: true, wait: true, parameters: parameters
                                }
            

            Unfortunately we get an exception thrown apparently on params:
            groovy.lang.MissingPropertyException: No such property: params for class: groovy.lang.Binding

            We tried using `env` but that does not seem to be available either.

            This is not something we can easily move to our shared library, since the list of parameters is specific to each of these pipelines.

            moskovych Oleh Moskovych added a comment - - edited

            bitwiseman, ok, here is a small example of my pipeline:

             

            #!/usr/bin/env groovy
            
            //library("jenkins_shared_library@1.0.0")
            
            //@groovy.transform.Field
            String resourcePrefix = new Date().getTime().toString()
            
            //@groovy.transform.Field
            Map dockerParameters = [
                registry: "docker.example.com",
                registryType: "internal",
                images: [
                    image1: [image: "image1", dockerfile: "Dockerfile1"],
                    image2: [image: "image2", dockerfile: "Dockerfile2"]
                ]
            ]
            
            pipeline {
              agent any
              options { skipDefaultCheckout true }
              parameters {
                booleanParam defaultValue: true, description: 'Build & Push image1', name: 'image1'
                booleanParam defaultValue: true, description: 'Build & Push image2', name: 'image2'
              }
            
              stages {
                stage("Prepare") {
                  options { skipDefaultCheckout true }
                  failFast true
                  parallel {
                    stage('Test1') {
                      steps {
                        // All variables available in simple stages and parallel blocks
                        echo "resourcePrefix: ${resourcePrefix}"
                        echo "dockerParameters: ${dockerParameters}"
                      }
                    }
                    stage('Test2') {
                      steps {
                        echo "resourcePrefix: ${resourcePrefix}"
                        echo "dockerParameters: ${dockerParameters}"
                      }
                    }
                  }
                }
            
            
                stage("Docker") {
                  options { skipDefaultCheckout true }
                  matrix {
                    axes {
                      axis {
                        name 'COMPONENT'
                        // Note: these values are the same as described in dockerParameters and params
                        values 'image1', 'image2'
                      }
                    }
                    stages {
                      stage("Build") {
                        when {
                          beforeAgent true
                          expression { params[COMPONENT] == true }
                        }
                        // agent { kubernetes(k8sAgent(name: 'dind')) }
                        steps {
                          // Failing on resourcePrefix/dockerParameters, as it doesn't have Field annotation
                          // Question is: why variables are not available inside matrix?
            
                          echo "resourcePrefix: ${resourcePrefix}"
                          echo "dockerParameters: ${dockerParameters}"
            
                          // Here is one step as example:
                          //dockerBuild(
                          //    image: dockerParameters.images[COMPONENT].image,
                          //    dockerfile: dockerParameters.images[COMPONENT].dockerfile
                          //)
                        }
                      }
                    }
                  }
                }
            
              }
            }
            
            

             

            The result is the following:

            stage `Prepare` goes fine anyway - as expected.

            stage `Docker` fails (on each matrix stage) with the message:

            groovy.lang.MissingPropertyException: No such property: resourcePrefix for class: groovy.lang.Binding
            

            Unless I add the annotation `@groovy.transform.Field`.

            The same happens with `dockerParameters`, where I have a map of different values which are similar and share some common values.

            Note: this is just an example; there are parameters which we use in different stages, and copy-pasting all of them into each stage is not an appropriate solution. Defining them as common/global outside the `pipeline` block is the only way to do it, isn't it?

             

            Additional info: plugin version 1.8.2 / Jenkins version 2.235.3. No splitting params (described in PR #405) or other experimental features were ever enabled.

             

            Any ideas?

            sodul Stephane Odul added a comment - - edited

            We found a partial workaround for our pipelines that need to pass parameters around. We used to define a variable, but with `params` and `env` somehow not available there, switching to a `get_params()` method, so that these values are available by the time it runs, seems to do the trick.

            Restart from stage is also working as expected.

            bitwiseman Liam Newman added a comment - - edited

            sodul
            This is very useful data.
            Can you give an example of what the get_params() form looks like?

            sodul Stephane Odul added a comment - - edited
            def get_params() {
                return [
                    gitParameter(name: 'BRANCH', value: params.BRANCH),
                    string(name: 'FOO', value: env.FOO),
                    booleanParam(name: 'SKIP', value: params.SKIP)
                ]
            }
            
            pipeline {
                ...
                    build(job: 'other/pipeline', propagate: true, wait: true, parameters: get_params())
                ...
            }
            

            Some of our pipelines include a more complex get_build_params():

            def get_build_params(name) {
                return [job: name, propagate: true, wait: true, parameters: get_params()]
            }
            

            So the build call can be as simple as build(get_build_params('other/pipeline')), which greatly simplifies our Jenkinsfiles and reduces copy-pasting, especially for some of our test automation pipelines that orchestrate calling many sub-pipelines. Since the various parameters are pipeline-specific we do not really want to put them in the library, as it would make it much larger than necessary; furthermore the parameters can be branch-specific, which makes using a shared library less ideal.

            Initially we had `@Field my_params = [...]`, but that was failing since `env` and `params` are now missing. We tried moving the variable definition to the first stage under a script block, but that would break `restart from stage`, since values are not persisted. This alternative approach recreates the same data over and over, but that's pretty lightweight and seems to be fully backward/forward compatible.


            moskovych Oleh Moskovych added a comment -

            bitwiseman, I've created a new bug, as this ticket's description doesn't match my case:

            https://issues.jenkins.io/browse/JENKINS-64846

            The workaround with the Field annotation still forces users to fix their pipelines, which means this is a breaking change.

            tkleiber Torsten Kleiber added a comment - - edited

            After upgrading my staging environment from 2.277.3 to 2.277.4, along with all of my plugins, I now get the error again. On the production environment the same pipeline works. The pipeline-model-definition plugin is v1.8.4 on both instances. The JVM property is configured in JENKINS_JAVA_OPTIONS in the file /etc/sysconfig/jenkins on both instances. If I look at System Information I can see other entries from JENKINS_JAVA_OPTIONS, like java.awt.headless, in both environments, but org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION only in my production environment.

            If I run 

            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
            

            in the script console, the job runs until the next restart (via Jenkins itself, "systemctl restart jenkins.service", or rebooting the server); after that it fails again.

            So at the moment I cannot upgrade my production environment anymore.
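
            (Editor's note: a script-console sketch for inspecting the flag at runtime. It assumes the pipeline-model-definition plugin is installed and that SCRIPT_SPLITTING_TRANSFORMATION is a public static field, as the script-console workaround in this thread implies; the JVM property is only read at startup, while the static field reflects the live value.)

            ```groovy
            // Print the JVM property (captured at startup) and the live static field:
            println System.getProperty(
                'org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION')
            println org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION

            // Setting the field here takes effect immediately,
            // but lasts only until the next Jenkins restart:
            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
            ```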

             

            sodul Stephane Odul added a comment - - edited

            For reference, we upgraded to 2.277.4 a couple of weeks ago and everything works normally for us.
            We do have this set on the command line of the server:

            -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            tkleiber With the monitoring plugin are you able to see the JVM arguments and confirm that you do have that CLI option passed properly?


            tkleiber Torsten Kleiber added a comment -

            I don't need the monitoring plugin, as I can normally see the entry under "Manage Jenkins" -> "System Properties", and I do see it in production. If I set this on staging via "Manage Jenkins" -> "Script Console", I cannot see it in "System Properties" and it works only until the next Jenkins restart.

            I saw the value "true" for the entry "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION" in "System Properties" before my upgrade of the staging environment, and I still see it on my production environment, which has not been upgraded.

            It seems to me that you start your Jenkins via the command line; that is not the case here.

            We start Jenkins as a service via "systemctl start jenkins.service" on staging (OS SLES 12) and "service jenkins start" on production (OS SLES 11). So setting "-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true" in "JENKINS_JAVA_OPTIONS" in the file "/etc/sysconfig/jenkins" seems the only option for our use case; this has worked before on staging and still works on production. Are there any other options to set this when starting Jenkins as a service?
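
            (Editor's note: with the systemd packaging, a drop-in override is one way to pass JVM options. This is a sketch: the unit name and the JAVA_OPTS variable follow the stock Jenkins systemd unit and may differ on SLES or with the Java Service Wrapper setup.)

            ```ini
            # Created via: systemctl edit jenkins
            # Written to:  /etc/systemd/system/jenkins.service.d/override.conf
            [Service]
            Environment="JAVA_OPTS=-Djava.awt.headless=true -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"
            ```

            followed by `systemctl daemon-reload` and `systemctl restart jenkins`.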


            jmcclain Jeffrey McClain added a comment -

            bitwiseman just a heads up: the issue number referenced in the built-in Jenkins error message related to this issue has a typo.

            It should be this issue, JENKINS-37984, instead of JENKINS-34987:

            General error during semantic analysis: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names].
            
            java.lang.IllegalStateException: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names]. 
            
            jglick Jesse Glick added a comment -

            jmcclain feel free to file a PR to fix that!


            tkleiber Torsten Kleiber added a comment -

            The workaround here, on Jenkins LTS 2.289.1 with the latest plugins, only works when activated via the script console, not via JENKINS_JAVA_OPTIONS in /etc/sysconfig/jenkins. So it works again only until Jenkins restarts.

            sodul Stephane Odul added a comment -

            tkleiber We have not upgraded to LTS 2.289.1 yet so cannot confirm, but it seems your /etc/sysconfig/jenkins is not being applied when your Jenkins instance is launched. You need to check that the java process has the -D option passed on its command line. You can check that with the monitoring plugin.

            Or, if you have shell access to the server, run ps auxwww.


            tkleiber Torsten Kleiber added a comment -

            Yes - you are right!

            Because the staging server was also upgraded from SLES 11 to 12, the service definition changed from a SysV init service to systemd.

            According to Installing Jenkins as a Unix daemon - Jenkins - Jenkins Wiki, the production server uses the "Java Service Wrapper" configuration, which uses /etc/sysconfig/jenkins.

            The staging server now uses the "OpenSuse" "Linux service - systemd" configuration from that link, which no longer uses /etc/sysconfig/jenkins.

            I have now added the JENKINS_JAVA_OPTIONS from /etc/sysconfig/jenkins directly to the ExecStart parameter in /usr/lib/systemd/system/jenkins.service, and everything works again!

            Thanks!
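For reference, a sketch of how the same result can be achieved with a systemd drop-in instead of editing the packaged unit file directly (the java path, war path, and command line here are assumptions and must match your distribution's actual unit file):

```ini
# Hypothetical drop-in: /etc/systemd/system/jenkins.service.d/override.conf
# Apply with: sudo systemctl daemon-reload && sudo systemctl restart jenkins
[Service]
# Overriding ExecStart requires clearing it first; the replacement command
# line below is only a sketch and must mirror your unit's original ExecStart.
ExecStart=
ExecStart=/usr/bin/java -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -jar /usr/lib/jenkins/jenkins.war
```

A drop-in survives package upgrades, whereas direct edits to /usr/lib/systemd/system/jenkins.service can be overwritten.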
            bitwiseman Liam Newman added a comment - - edited

            tkleiber
            I'm glad you were able to figure out the problem.

            tkleiber sodul moskovych jmcclain
            How is the feature behaving for you? Do you have any feedback, comments, observations? I'm trying to evaluate its readiness for wider use.

            jmcclain Jeffrey McClain added a comment - - edited

            How is the feature behaving for you? Do you have any feedback, comments, observations?

            bitwiseman For reference, initially one of my larger pipelines stopped working, so I tried the 

            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            workaround; however, it just resulted in a different message about needing to set

            SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true

            in order to use variables defined outside of my pipeline. Even then, I still needed to add "import groovy.transform.Field" and "@Field" declarations to my variables, and the "env." prefix seemed to stop being recognized by Jenkins for defining environment variables within my pipeline, etc.

            Eventually I just moved some of my pipeline stages to a downstream helper job to get the overall pipeline working again, which I'm guessing is the recommended approach anyway, rather than manually changing the experimental settings SCRIPT_SPLITTING_TRANSFORMATION and SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES to true.

            I'd say it definitely seems to be a bit of a breaking change, but if you think the optimization is worth it, then I don't really mind. I feel the error message could be a bit more intuitive, though; maybe something like:

            "Your declarative pipeline code is [x] KB, which exceeds Java's maximum method bytecode size of 64 KB and therefore can't be compiled by Jenkins. Consider moving some stages to downstream pipelines, or splitting your pipeline into multiple smaller pipelines, to reduce your code size below the 64 KB limit. Alternatively, set org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true as a workaround. See JENKINS-37984 for more details."

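The "downstream helper job" approach mentioned above can be sketched like this (the job name and parameter are hypothetical):

```groovy
// Sketch: delegate a heavy stage to a downstream job, keeping the main
// pipeline's generated method under the 64 KB bytecode limit.
stage('Deploy') {
    steps {
        build job: 'my-app-deploy-helper',
              parameters: [string(name: 'VERSION', value: "${env.BUILD_NUMBER}")]
    }
}
```

Each downstream job is compiled separately, so moving stages out of the main Jenkinsfile directly reduces the size of the method that hits the limit.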

            tkleiber Torsten Kleiber added a comment -

            > bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

            Our main declarative multibranch pipeline works only with the SCRIPT_SPLITTING_TRANSFORMATION feature; without it we would have to go back to the classic up-/downstream approach. We don't use variables outside of the pipeline at the moment. All other pipelines are small enough.

            We use trunk-based development in a monorepo for our main loan application, with different backend and frontend technologies, and not all of them are implemented yet.

            Although we try to move a lot of logic to pipeline libraries, there remain a lot of stages, because of when conditions that depend on the branching model and repository names (e.g. for testing the Jenkins staging instance). Furthermore, we need different pipeline stages for environments like development, test and production, and for different controllers building on different operating systems.

            One thing we miss at the moment is better parallel support, as other systems like UC4 have - e.g. parallel within parallel and the corresponding visualization in Blue Ocean.
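As an illustration of the branch- and environment-dependent stages described above, a minimal sketch (the stage, branch, and variable names are hypothetical):

```groovy
// Sketch: a stage gated on branch name and a hypothetical TARGET_ENV variable
stage('Deploy to test') {
    when {
        allOf {
            branch 'main'
            environment name: 'TARGET_ENV', value: 'test'
        }
    }
    steps {
        echo 'Deploying to the test environment'
    }
}
```

Every such when block adds stages to the generated code, which is why pipelines built this way grow toward the 64 KB limit even when the step logic lives in shared libraries.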

            moskovych Oleh Moskovych added a comment -

            > bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

            We are not using the SCRIPT_SPLITTING_TRANSFORMATION setting (by default it is false, right?).

            Our pipelines mostly use methods/functions from a Jenkins shared library, and
            all pipelines contain some global variables before the pipeline block (variables with some Groovy logic, which are used in more than two stages or should be defined as global).
            An example pipeline can be taken from the description of this issue: JENKINS-64846

            Pipelines are separated from functions, so there are no pipeline blocks in the shared library's call functions, like it was shown here: JENKINS-64846?focusedCommentId=407258

            bitwiseman, I know this is beta, but is there any documentation available describing the flags and the behavior of pipelines? It would be good to have examples without diving into the plugin source code, especially with our approach of using Groovy outside the pipeline block.

            People

              Assignee: Unassigned
              Reporter: Anudeep Lalam
              Votes: 79
              Watchers: 94
