Jenkins / JENKINS-37984

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large! error in pipeline Script

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Blocker
    • Component: workflow-cps-plugin
    • None

      Note from the Maintainers

      There is a partial fix for this for Declarative pipelines in pipeline-model-definition-plugin v1.4.0 and later, significantly improved in v1.8.4. Due to the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):

      org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true 

      As noted, this still works best with a Jenkinsfile that has the pipeline directive as the only root item in the file.
      Since v1.8.2 this workaround reports an informative error for pipelines using `def` variables before the pipeline directive. Add a @Field annotation to those declarations.
      This workaround generally does NOT work if the pipeline directive is inside a shared library method. If this is a scenario you want, please come join the Pipeline Authoring SIG and we can discuss.
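
      For illustration, a minimal sketch of enabling the flag and of the @Field workaround (the variable name below is hypothetical):

      // In the Jenkins script console (resets on restart); at startup, pass the
      // same property to the Jenkins JVM as -D<property name>=true instead:
      org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true

      // Jenkinsfile: annotate top-level 'def' variables with @Field
      import groovy.transform.Field

      @Field def buildArgs = '--release'   // was: def buildArgs = '--release'

      pipeline {
          // ...
      }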

      Please give it a try and provide feedback. 

      Hi,

      We are getting the error below in a Pipeline that has some 495 lines of Groovy code. Initially we assumed that one of our methods had an issue, but once we remove any 30-40 lines of Pipeline Groovy, the issue goes away.

      Can you please suggest a quick workaround? It's a blocker for us.

      org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
      General error during class generation: Method code too large!
      
      java.lang.RuntimeException: Method code too large!
      	at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
      	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
      	at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:815)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      
      1 error
      
      	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1073)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      Finished: FAILURE
      

        1. JenkinsCodeTooLarge.groovy
          45 kB
        2. Script_Splitting.groovy
          44 kB
        3. Script_Splittingx10.groovy
          519 kB
        4. errorIncomaptiblewithlocalvar.txt
          8 kB
        5. java.png
          294 kB


          Liam Newman added a comment -

          brianjmurrell

          The latter is just a band-aid, postponing of the inevitable and frankly a wasted investment if you are going to have to end up scrapping the whole thing at some point and moving to an entirely new solution that won't have such inevitable fatal limits.

          I don't understand your statement here. There is no way to eliminate the point at which this limit is hit - it is part of the Java class file binary format. You can hit it while writing any Java program. You don't usually hit it because the structure of Java encourages code practices that make it unlikely.

          The Script_Splitting.groovy file shows that script splitting addresses this issue for Declarative Pipelines that don't use variables (which is best practice). It is effectively the same as JenkinsCodeTooLarge.groovy but without the variable declaration. Is there still a point at which you may hit the size limit? Yes; however, it is over 1000 stages (that's where I stopped), and even higher for matrix-generated stages. At that point, hitting the issue isn't "inevitable" but rather highly unlikely.

          How big of a pipeline are you trying to run?

          If what you mean to say is "Well, I use variables so this doesn't help me", I understand your frustration. If you have bandwidth to contribute a solution, I'd love to chat with you about it.


          Brian J Murrell added a comment -

          I will investigate if/how this helps the next time we hit the limit.

          M McGrath added a comment -

          Hi bitwiseman,

          I have taken your sample pipelines (Script_Splitting.groovy and Script_Splittingx10.groovy) and can reproduce the "Method code too large" issue. When I enable SCRIPT_SPLITTING_TRANSFORMATION=true, the two pipelines you provided run successfully.

          However, when I add Script_Splitting.groovy to a shared library under vars/, add the shared library under the Jenkins system configuration, create a mock app with a Jenkinsfile that consumes the pipeline, and set up a Multibranch job, I can still reproduce "Method code too large".


          Richard Olsson added a comment - - edited

          Hi,

          My config & setup:

          • Jenkins ver. 2.190.3
          • Declarative pipelines
          • Pipeline jobs with a Groovy pipeline script of 591 lines and 39 jobs to build are failing with "General error during class generation: Method code too large!" (files with 396 lines are fine)

           

          I have Job DSL logic in place that reads configuration files to create "pipeline code" (plus jobs and so on) which is stored in variables in the Job DSL Groovy scripts. That code is then used when creating the Jenkins pipeline jobs.

          So the pipeline code is created "on the fly" by the Job DSL Groovy scripts. In the repo I have a pipeline code TEMPLATE file; I read that into the Job DSL Groovy code, do some editing/replacing, and store the final pipeline code in an internal variable. The pipeline code is therefore only stored in Groovy variables, not in any file on disk, so it cannot be handled as static files.
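
          For illustration, a minimal Job DSL sketch of this template-and-variable approach (file, token, and job names are hypothetical):

          // seed job script (Job DSL); 'generatedStages' would be built earlier
          // from the configuration files
          def template = readFileFromWorkspace('pipeline-template.groovy')
          def pipelineCode = template.replace('@@STAGES@@', generatedStages)

          pipelineJob('generated-pipeline') {
              definition {
                  cps {
                      script(pipelineCode)   // pipeline code lives only in this variable
                      sandbox()
                  }
              }
          }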

          This infrastructure works very well in other pipeline setups with fewer stages, Jenkins jobs, and pipeline code lines. This issue came as a surprise when creating this new setup with bigger pipelines. :-|

          What to do?

          I've seen references to https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines.

          But I doubt that I can update code in shared library files from Job DSL Groovy code...?  (Compared to what's done today - storing the final pipeline code in a Groovy variable.)

          Anyone, any suggestion on the way forward? Or is the only way to split into more pipeline jobs and pipeline code files?

          I don't want to make any big changes to the Job DSL logic that's already in place and works fine for today's smaller pipeline setups!

           


          Alan Champion added a comment - - edited

          Just adding weight in the hope that this may be addressed sooner rather than later.

          My objective has been to break down the legacy Jenkins jobs to run various steps as parallel stages for efficiency, with minimal change to the core scripts (which I have parameterised to accommodate either serial or parallel execution).

          • DSL pipeline held in SCM (Git) consists of 632 lines
          • builds five legacy nested Jenkins jobs as "explicitly numbered" Primary Stages:
            1. Prepare Environment called 7 times (8 stages: 1 serial + 5 parallel + 1 serial to merge report)
            2. Generate Tests (based on historical samples) with a conditional alternative stage to accommodate a re-run that required recycling processed data (i.e. 2 stages)
            3. Verify Clean Environment (optional) in parallel with stage 2 (5 stages: 1 serial + 2 parallel + 1 serial to merge report)
            4. Exec Generated Tests (3 stages: 2 parallel same tests on two baselines)
            5. Compare Results (7 stages: 1 serial + 4 parallel + 1 serial to merge report)

          Currently, this amounts to 25 stages, including the five overhead stages to handle the parallelisation.  I evolved to this state gradually, and only after I parallelised the first stage did the "too big" problem appear.  I had also expected to improve performance and visibility further by splitting Primary Stage 4 into 10+ parallel stages.

          I am thinking that the best way forward may involve breaking the jobs into three levels (instead of two) by promoting the five Primary Stages as nested pipelines.

          I accept that this regression testing exercise may not be the norm for most but any advice/help would be appreciated on a pragmatic way forward.

          Thanks, Alan

           


          Greg Turner added a comment -

          I've been rewriting a scripted pipeline as declarative to reduce the complexity and improve readability, but I have also run into this same issue with what I'd consider "a typical use case" of Jenkins.  I'm trying to reduce the size but am still up against the limit.

          I understand this is probably not an easy fix but some assurance that this will be fixed in a future release would be helpful.


          Steven Gardell added a comment - - edited

          This same behavior is seen with scripted pipelines, and it can be worked around - with increasing pain as the functional complexity of a pipeline grows.  Apparently this is due to a core JVM limitation of 64 KB of compiled bytecode per method, which is unfortunate in a code-generation world. Rather than spending a ton of time working around this, it would be really nice just to make the limit ten times as big...

          It would also be helpful to have a little more insight into the contributors to this. For example, Jenkins scripts, whether declarative or scripted, often have substantial blocks of text directly scripting the node (e.g. bash or whatever).  Does the size of such scripting count directly against the Groovy/Java code size, or is each of these treated as an opaque data blob whose size doesn't really matter? Is there some logging that lets me see the current method size?

          One does have to wonder: is Groovy really the proper vehicle for defining pipelines, then?


          Stefan Drissen added a comment -

          I just ran into this migrating from one orchestration multijob plus multiple freestyle jobs to one pipeline (declarative plus matrix) on Jenkins 2.226.  I have multiple stages (build / test / deploy) with matrices inside them (build on x y, test on x y z, deploy on x y). My Jenkinsfile is 587 lines.

          Antonio Muñiz added a comment - - edited

          FTR: I'm hitting this with a "not-so-big" (and no matrix) pipeline of ~800 lines; it includes a few separate stages:

          • Build on Linux (and unit tests)
          • Build on Windows (and unit tests)
          • QA (spotbugs, checkstyle, etc)
          • Security analysis
          • Integration tests
          • Release


          Jesse Glick added a comment -

          amuniz Scripted? (or "Declarative" with script blocks?) It is unknown whether a general fix is feasible for Scripted. It would likely require a redesign of the CPS transformer, which is possible in principle, but this is one of the most difficult areas of Jenkins to edit.


          Dee Kryvenko added a comment -

          I've said this before in this thread, but as I keep getting notifications about new comments on this issue from people who refuse to admit their pipeline design sucks - I have prepared this detailed walkthrough.

          Using this technique, I've been able to run 100 stages in both scripted and declarative mode before hitting this issue. I didn't try the workaround by bitwiseman, which might improve the Declarative numbers even further. I want to emphasize that if you have even half that many stages - you are doing CI/CD wrong. You need to fix your process. Jenkins just happens to be the first bottleneck you've hit down that path. That discussion can get really philosophical, as we would need to properly redefine what CI and CD are, what a Pipeline is, and why Jenkins is not a cron with a web interface. I really have no desire to do that here.

          The exception might be matrix jobs; even then I'm not so sure, though I admit there might be a valid use case with that many stages in that space. But even then - execute your scripted pipeline in chunks (details below) - and there is no limit at all; I've been able to run a pipeline with 10000 stages! Though then my Jenkins fails to render that many stages in the UI. But more on that later.

          Now, getting into the right way of doing Jenkins.

          First and foremost - your Jenkinsfile, no matter where it is stored, must be small and simple. It shouldn't say WHAT to do, nor define any stages. All of that is implementation detail that you want to hide from your users.

          An example of such a Jenkinsfile:

          library 'method-code-too-large-demo'
          
          loopStagesScripted(100)
          

          Note it doesn't matter at this point whether you're going to use scripted or declarative pipelines. Here, you are just collecting user input. In my example I have just one input - a number that defines how many stages I want in my pipeline. In a real-world example it might be any input you need from the user - type of project, platform version, any package/dependency details, etc. Just collect that input in any form and shape you want and pass it to your library. In my example, a demo library lives here: https://github.com/llibicpep/method-code-too-large-demo, and loopStagesScripted is a step I have defined in it.

          Now, it is up to the library to read the user input, do whatever calculations are needed, generate your pipeline on the fly, and then execute it. The trick is that the pipeline is just a skeleton: it defines the stages but does not actually perform any steps. For the steps it falls back to the library again. The resulting pipeline from that Jenkinsfile will look like this:

          stage('Stage 1') {
              podTemplate(yaml: getPod(1)) {
                  node(POD_LABEL) {
                      doSomethingBasedOnStageNameOrWhatever(1)
                  }
              }
          }
          
          stage('Stage 2') {
              podTemplate(yaml: getPod(2)) {
                  node(POD_LABEL) {
                      doSomethingBasedOnStageNameOrWhatever(2)
                  }
              }
          }
          
          ...
          

          Note that in my example, intentionally to increase the complexity of my pipeline and demonstrate that everything is possible, I am using the Kubernetes plugin, and I fall back to the library for my Pod definition calculation based on the user input too. So my pipeline body doesn't really have much in it. Once the library has generated the pipeline string (and you can be as creative as you want about how you handle user input and templating - I had some examples in this issue previously), it uses evaluate to execute it. The actual steps live in the library under doSomethingBasedOnStageNameOrWhatever; both the step name and its input may come from the templating layer to actually do something.

          I want to emphasize that I didn't build my pipelines this way to work around this particular issue. Proper abstraction layers for stages (interfaces) and steps (implementation) just help me keep my pretty complex CI/CD code in good shape and order. It's readable, easy to understand, and also easily testable (both unit and integration testing).

          Like I said, I've been able to run 100 stages that way before it fails. Even if you really need more, which I doubt, you can execute the pipeline in chunks - for instance, each stage separately. There is no limit if you do it that way; I've run 10000 stages like that and didn't hit the "Method code too large" issue (though I did face other issues, like my Jenkins failing to render that many stages in the web UI). An example Jenkinsfile:

          library 'method-code-too-large-demo'
          
          loopStagesScriptedInChunks(10000)
          

          If you look into the library code, you'll see all it does is call evaluate for each stage separately. There is a downside to this approach - Jenkins will not know all the stages in your pipeline ahead of time, so in the UI, stages will pop up as they get executed.
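
          For illustration, a hedged sketch of what the chunked variant might look like inside the library (the step name matches the example above; the body is a simplified guess, not the actual library code):

          // vars/loopStagesScriptedInChunks.groovy - illustrative sketch
          def call(int count) {
              for (int i = 1; i <= count; i++) {
                  // each evaluate() call compiles its own small script class, so no
                  // single generated method approaches the 64 KB bytecode limit
                  evaluate("""
                      stage('Stage ${i}') {
                          echo 'running stage ${i}'
                      }
                  """)
              }
          }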

          Now, Declarative pipeline:

          library 'method-code-too-large-demo'
          
          loopStagesDeclarative(250)
          

          It uses the same technique as loopStagesScripted, except that the body of the generated pipeline is in Declarative style. It gets executed the same way via evaluate, and results in something like:

          pipeline {
            agent none
            stages {
          
              stage('Stage 1') {
                  agent {
                      kubernetes {
                          yaml getPod(1)
                      }
                  }
                  steps {
                      doSomethingBasedOnStageNameOrWhatever(1)
                  }
              }
          
              stage('Stage 2') {
                  agent {
                      kubernetes {
                          yaml getPod(2)
                      }
                  }
                  steps {
                      doSomethingBasedOnStageNameOrWhatever(2)
                  }
              }
          
          ...
          
            }
          }
          

          I hope whoever really wanted a solution gets it now. And to whoever wants Jenkins to accommodate their failures and maintain an artificial and invalid use case - I'm really sorry for you.


          John Malmberg added a comment -

          Matrix builds are currently not viable if any stage in the matrix is skipped with a when clause.

          The job execution logic appears to work correctly, but the web UI is totally useless in both the traditional and Blue Ocean views.

          So we cannot use matrix builds as an alternative until https://issues.jenkins-ci.org/browse/JENKINS-62034 is fixed.


          Brian J Murrell added a comment - - edited

          llibicpep Your explanation and example of how the rest of us are all doing CI/CD wrong seems to assume everyone is running very simple, identical stages, as your loopStagesDeclarative.groovy example demonstrates.

          I doubt anyone here with this problem has 100 identical, very simple stages like your example demonstrates. Why don't you try having your example create real stages that have multiple-condition when clauses and post clauses with multiple post sub-clauses in them, and see how many stages you can get?

          But moreover, how do you propose your solution solves the problem for people with various different Build, Test and Deployment stages utilising a looping pipeline generator such as you propose?

          So while you can be congratulated on having a 100-stage pipeline, you have to admit that they are not 100 useful and unique stages, are they?

          Can you point to your real-world useful Jenkinsfile and pipeline library where you implement your proposed technique so that we can all see what we are doing wrong?


          Brian J Murrell added a comment -

          Maybe somebody (at jenkins-ci.org) can tell us all whether there is any hope of this ever being fixed, or whether this is the end of the road for Jenkins for anyone needing anything more than trivial pipelines - even those who have already factored their entire pipelines out into libraries, such that their Jenkinsfile does nothing more than orchestrate stages that call library functions on agents when conditions are right for each stage to run.

          Dee Kryvenko added a comment -

          brianjmurrell I never said stages must be identical or similar for this to work. I run a very complex CI/CD platform based on Jenkins that supports CI for ~20 platform types (maven, gradle, npm, python, golang, dotnet, php, ruby, docker, chef cookbooks, helm, terraform, etc.) with various CD deployment methods (chef, terraform, helm, ECS, codedeploy, etc.). It allows various combinations of these CI and CD methods, with quality gates between stages (linting, sonar, integration testing, cost analysis, and various security scans), and it manages ~300 applications.

          I can't solve your problems for you. My example on GitHub obviously was not a real-world example; its sole purpose is to demonstrate the concept. I can't just share my proprietary code with you; I put some effort, on my personal free time, into putting that example together. Yet I am pretty sure it is sufficient for anyone with minimal programming experience to understand what I am talking about. At the end of the day, abstraction, templating, and decomposition aren't exactly new concepts.

          I can't say I'm always happy with Jenkins, and it really feels like a 19th-century tool sometimes, but this particular problem a lot of people are moaning about in this ticket is easily solved and avoided - if you put at least some effort into design instead of ad-hoc scripting whatever comes to mind first. Things get pretty bad pretty fast if technical debt levels are not managed.

          If you didn't do your due diligence at the time, and now the system collapses on you like this - there aren't many people to blame for that.


          Dee Kryvenko added a comment -

          Let me give you another hint.

          Stop thinking about CI/CD in terms of stages and the conditions under which to run them. That's not automation; it's mechanization.

          Think about CI/CD in terms of what you want to achieve - you want to lint source code, build an artifact, test it, build/update an environment, deploy the artifact there, test, scan, etc.


          Liam Newman added a comment -

          llibicpep

          brianjmurrell

          I'd appreciate it if both of you would take a minute to stop, calm down, and review the Jenkins Code of Conduct.

          Please treat each other with respect and kindness.  We're all trying to make the project better and help each other out. 

           

          Brian,

          I've been meaning to take another swing at improving this.  I'll take another look at it this week.

           


          Henry Borchers added a comment -

          bitwiseman, I want to thank you for trying to cool things down here.

           

          There has certainly been a lot of contention about the importance of this ticket in the comments.  It's really frustrating to run into the "Method code too large" error, and we can all get a little hotheaded about something like this. I have run into this issue myself, and there have been many times where I have had to refactor my pipeline in a way that makes it very hard to read and maintain.

           

          I really hope you are able to improve this. Even if you aren't able to eliminate or reduce the problem, it would be very helpful to be able to check for it without having to run the pipeline. I use pipeline-model-converter/validate to lint my pipeline, but it won't tell me whether Jenkins can handle my pipeline until it runs.


          Jim Castillo added a comment -

          Agreed; we run into this often.  While refactoring is doable - which we have done, and continually do every two months or so - it has cost a lot of time and effort to maintain, as well as producing undesirably obfuscated build and deploy code.

          Yes, refactoring works, but that doesn't seem to be what people are asking for or need.  Or at least for us - we would like to see alternatives to refactoring.

          I appreciate the time and effort that goes into maintaining open source, and I love to support Jenkins and the community, so I want to offer my thanks.


          Brian J Murrell added a comment -

          Even with all of the bad effects of having to refactor, such as the obfuscation and indirection of having so much code in so many places (a Jenkinsfile, libraries, etc.), refactoring itself has a finite limit to its effectiveness as a solution.

          https://github.com/daos-stack/daos/blob/master/Jenkinsfile is a Jenkinsfile that is on the verge of "Method code too large" (I know, because I am trying to add a new Build stage and am getting that error), and as you can see, it is merely a framework of a Jenkinsfile that calls out to library functions to do all of its work.  I don't know that there is much opportunity for more refactoring in that file.  It's already a Jenkinsfile of single-line steps.

          What do you do when you have already factored all of the functionality that you can out of your Jenkinsfile and still hit the error?

           


          Carsten Mück added a comment -

          I haven't been into this material for a long time, but back when I had the problem, it helped to move methods out into another file and just load that file.
          The "Method code too large" exception seems to appear only when the Jenkinsfile itself is loaded, so when you load the other file (after checking it out from SCM or wherever you have put the extra code), it is free to load without the exception.

          So back then, my solution was to have the initial Jenkinsfile call a Jenkinsfile.method.groovy after checking it out.

          Hope this helps someone, even though it is not a clean solution, as some errors only appear when the file is loaded (for example, simple compilation errors can be hidden until then).


          Brian J Murrell added a comment -

          mueck You wouldn't have a more concrete example of your solution that you could point at, would you?  Your workaround sounds interesting, but I am not sure I am familiar with the methodology you are describing.

          Carsten Mück added a comment -

          I currently don't have a good example at hand, but this Stack Overflow answer shows it a bit:
          https://stackoverflow.com/a/51780707

          If you load a file and call a method which itself calls the methods from your Git example, then you have already saved one call that would lead to the "Method code too large" exception. And you could also call even more methods from that one method, instead of from your main Jenkinsfile.
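
          For illustration, a minimal sketch of that two-file split (file and method names are hypothetical; the loaded script must end with `return this` so its methods can be called):

          // Jenkinsfile - stays small; the heavy methods live in the second file
          node {
              checkout scm
              // 'load' compiles the file into its own class, so its methods
              // don't count against this script's generated method size
              def helpers = load 'Jenkinsfile.method.groovy'
              helpers.runBuild()
          }

          // Jenkinsfile.method.groovy
          def runBuild() {
              stage('Build') {
                  sh 'make'
              }
          }
          return this   // required so the caller can invoke the methods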


          Dee Kryvenko added a comment -

          Brian, indirection and abstraction are not always obfuscation and are not always bad. 1297 lines of code in the Jenkinsfile at your link is not exactly readable or maintainable, and is a sign of a lot of duplication. Here are a few suggestions to start with:

          1. Just to address the big elephant in the room - the first ~300 lines of the Jenkinsfile are code, and code doesn't belong in a Jenkinsfile.
          2. Most of the build stages (~500 lines) go against the DRY principle - it is basically the same code with small tweaks per platform. It can be defined in a library. Any "block" in a Jenkinsfile is basically nothing more than a Groovy closure, so it is perfectly fine to do some code generation and return a closure, keyed by input parameter(s), as a stage body from a library step (a sketch of such a step follows after this list). Your Jenkinsfile might look like this:
          stages {
           stage('Build RPM on CentOS 7', getBuildStage('centos7'))
           stage('Build RPM on Leap 15', getBuildStage('...'))
           stage('Build on CentOS 7', getBuildStage('...'))
           stage('Build on CentOS 7 Bullseye', getBuildStage('...'))
           stage('Build on CentOS 7 debug', getBuildStage('...'))
           stage('Build on CentOS 7 release', getBuildStage('...'))
           stage('Build on CentOS 7 with Clang', getBuildStage('...'))
           stage('Build on Ubuntu 20.04 with Clang', getBuildStage('...'))
           stage('Build on Leap 15', getBuildStage('...'))
           stage('Build on Leap 15 with Clang', getBuildStage('...'))
           stage('Build on Leap 15 with Intel-C and TARGET_PREFIX', getBuildStage('...'))
          }

          A ~500-line to ~10-line reduction right there.
          That potentially applies to the test stages as well - I haven't looked closely at whether they also violate DRY or are actually different. It is worth mentioning that literally any stage can be stored in a lib, reusable or not. Having the Jenkinsfile as a nice, clean orchestrator and hiding the implementation somewhere else is almost always a good idea. The entire `pipeline` block and its body can be sourced from a lib.

          3. Is that many stages actually needed? There might be an opportunity to rely more on feature flags vs. separate binaries, which makes sense not only from the CI point of view but also helps reduce the amount of time needed to test everything, as well as the overall complexity.
          4. And like I said, you most likely have more than one repository, so having the implementation in the Jenkinsfile leads to lots of code duplication. That is to say, all the suggestions I am making are just common-sense suggestions and should be done NOT because of this "Method code too large" error but because they make sense on their own. When you want to reuse some code, you don't copy-paste it, do you? You publish it as a library and then consume it as a dependency. Jenkinsfiles are no different. It should have been done from the get-go and not in response to a system collapse. You don't code your projects in a single file without architecture and design, only to start splitting them up in the aftermath of issues, do you? Why should a Jenkinsfile be any different? And I did walk through the other repos in your org to confirm what I'm saying, and I found that you actually understand perfectly what I'm saying here, as all of your other repos already use something like `packageBuildingPipelineDAOS`, so I am not entirely sure what this conversation is all about. Whether an abstraction you came up with feels like an obfuscation or a simplification is entirely up to how you implement it.
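
          For illustration, a hedged sketch of what such a getBuildStage library step might look like with a Scripted Pipeline (the step name comes from the example above; the body and parameters are hypothetical):

          // vars/getBuildStage.groovy - returns a closure to use as a stage body
          def call(String platform) {
              return {
                  node(platform) {
                      checkout scm
                      sh "./ci/build.sh ${platform}"
                  }
              }
          }

          // Scripted usage - stage(name, closure) takes the closure as its body,
          // so the per-platform code never inflates the Jenkinsfile itself:
          stage('Build on CentOS 7', getBuildStage('centos7'))
          stage('Build on Leap 15', getBuildStage('leap15'))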


          Brian J Murrell added a comment -

          Just to address the big elephant in the room - this first ~300 lines of Jenkinsfile is code and code doesn't belong to a Jenkinsfile.

          Perhaps, or perhaps not. But it is orthogonal to the issue at hand, as (as I understand it) everything outside of the pipeline block does NOT contribute to the actual error everyone is trying to work around here. One can debate the usability of having code located where it's used (only once, with no need to be moved for DRY purposes) vs. having to refer to a completely different project/library (such as a pipeline library). But again, that is not germane to the issue at hand here, so let's not get distracted by such a debate.

          Most of the build stages (~500 lines) are going against DRY principle

          I cannot disagree with you here. But this is what Pipeline forces one to do. In theory, Matrix is supposed to be the way to alleviate this; however, Matrix has a number of aesthetic and (moreover) actual functionality bugs that prevent it from being used.

          While it's clear to me how a whole Jenkinsfile can be put into a library and re-used, such as we do with packageBuildingPipelineDAOS, how we would use the stages block as the only thing in a Jenkinsfile, as you do above, is very unclear to me. I obviously don't have as deep an understanding (nor do I feel I should actually need to – but that's beside the point) of how Jenkins processes its Jenkinsfile and turns that into Java/Groovy, so maybe you can enlighten me on how that works.  What sort of thing is a getBuildStage() function allowed to actually return?  You seem to be indicating that it can be much more than simply the functionality of a step – such as a whole stage which does not contribute to this Method code too large error.

          I have never seen any such construct defined or documented anywhere.  Even the Jenkinsfile as a function in a library is documented.


          Dee Kryvenko added a comment - - edited

          Brian, my apologies - I just realized that what I suggested above will not work for Declarative pipelines, which is the flavor of pipelines you are using. But let me make a few remarks on that:

          1. As you move towards the separation of abstraction and implementation, which in my opinion is inevitable for any more or less complex pipeline, maybe it is worth revisiting what you need Declarative pipelines for. Think about this: the opinionated Declarative syntax was made for human consumption, but in the lib-pipeline-factory scenario humans don't interact with that syntax anymore - their new interface is a statement like `packageBuildingPipelineDAOS`. These new interfaces need to be declarative and readable - and you define them on your own, to the best of your liking. The pipeline DSL body itself is nothing more than a middle layer now that gets generated by a library and executed by Jenkins; its syntax doesn't matter as much anymore. Switching to Scripted pipelines in that scenario opens the door to much more flexibility, as Declarative syntax is artificially limited (for the sake of being opinionated); see the sketch after this list. Worth mentioning that a few features like "Restart from Stage" are currently not available for Scripted pipelines, but since the pipelines are now programmatically generated by a library, it would be extremely easy to accept an input variable indicating which stage to restart from and generate a pipeline starting from only that stage.
          2. For the Jenkins maintainers - allowing the syntax I suggested above for Declarative pipelines might be a partial solution (or a remediation at the very least) to this issue. From a technical standpoint I imagine this limitation is artificial; at the end of the day any Jenkinsfile, scripted or declarative, is a super-set of Groovy, and a `{...}` expression is always a closure. Allowing library steps to return closure instances in a Declarative pipeline (which can still be validated for declarative syntax), and allowing them to be used as the body for `stage`, `when`, `agent`, etc. blocks, sounds like a good idea to me. In fact, now that I'm thinking about it, this is probably the only major obstacle stopping people like myself from getting rid of scripted pipelines altogether. If I can programmatically generate a Declarative pipeline in a library - I can get the best of both worlds.
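
          For illustration, here is a minimal sketch of that factory idea for a Scripted pipeline, assuming a shared library step defined in vars/getBuildStage.groovy (the step name comes from the example above; the node label and build script are hypothetical):

          // vars/getBuildStage.groovy -- a library step that returns a closure
          // suitable for use as a stage body, keeping the Jenkinsfile tiny.
          def call(String platform) {
              return {
                  node(platform) {
                      checkout scm
                      // hypothetical per-platform build entry point
                      sh "./ci/build.sh ${platform}"
                  }
              }
          }

          // Jenkinsfile (Scripted): stages are generated from data, so the
          // compiled script body stays small no matter how many platforms exist.
          def platforms = ['centos7', 'leap15', 'ubuntu20.04']
          platforms.each { p ->
              stage("Build on ${p}", getBuildStage(p))
          }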


          Jesse Glick added a comment -

          Taking a break from discussion of impact and workarounds, some thoughts on the implementation side.

          Ultimately this is a limitation of the JVM. You can see something similar without Jenkins, albeit artificially, by just making a Groovy source file consisting of, say,

          println(13)
          

          repeated a few thousand times, and trying to run it:

          org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
          General error during class generation: Class file too large!
          
          java.lang.RuntimeException: Class file too large!
          	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
          	at org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:827)
          	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
          	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
          	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
          	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
          	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
          	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
          	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
          	at groovy.lang.GroovyShell.run(GroovyShell.java:517)
          	at groovy.lang.GroovyShell.run(GroovyShell.java:507)
          	at groovy.ui.GroovyMain.processOnce(GroovyMain.java:653)
          	at groovy.ui.GroovyMain.run(GroovyMain.java:384)
          	at groovy.ui.GroovyMain.process(GroovyMain.java:370)
          	at groovy.ui.GroovyMain.processArgs(GroovyMain.java:129)
          	at groovy.ui.GroovyMain.main(GroovyMain.java:109)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:109)
          	at org.codehaus.groovy.tools.GroovyStarter.main(GroovyStarter.java:131)
          
          1 error
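
          For reference, generating and compiling such a source file programmatically looks like this (the file name and repetition count here are arbitrary; a few thousand repetitions suffice, as noted above):

          // Write println(13) a few thousand times into one script, then ask
          // Groovy to compile it; parsing alone is enough to hit the limit.
          new File('Big.groovy').text = 'println(13)\n' * 5000
          new GroovyShell().parse(new File('Big.groovy'))  // throws: Class file too large!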
          

          The reason this is particularly noxious for Pipeline script is that the CPS and sandbox transforms result in considerable code bloat, so a method which might have been compiled by a stock Groovy runtime into a few Kb winds up going over the limit. And I suppose the problem is particularly noticeable for Declarative because of the big single implicit Script.run method (i.e., the main body), which the Declarative plugin (pipeline-model-definition) works around in some cases but cannot deal with when you are effectively mixing bits of Scripted into a mostly Declarative structure, as people often do since there is nothing forbidding it (alas).

          The CPS transformer could try to detect methods which are going to be excessively big (somehow?), and then internally rewrite them to use subroutines, shifted to other classes where necessary. It just seems like it could get very complicated to figure out when it is safe to do this and how to do it. If a method body in general contains local variables in various scopes and various sorts of control structures, breaking pieces of it off while preserving semantics is a challenging compiler (de-)optimization. You could probably do a somewhat simpler trick, activated only for big methods, which runs every single instruction as a separate method. The result would definitely be slower to load and run but it might work. Either way, your stack traces are going to look very confusing unless you do further work to hide synthetic stack frames. Offhand I would expect this sort of thing to be on the scale of a Google Summer of Code project, for someone with a deep computer science background, and it would be quite risky (large risk of regression).

          Going forward, I would think this level of effort would be better spent in making jenkinsfile-runner able to run stock Groovy—it already turns off the sandbox transformer, but turning off the CPS transformer would require a bunch of work in workflow-cps—and/or creating a new FlowDefinition which runs stock Groovy in a separate process while flipping control flow back and forth with the controller (a.k.a. external Pipeline execution). There are numerous other problems with the CPS transformation and it does not seem prudent to make massive changes to that code, which was written by Kohsuke before he moved on and which only a handful of people in the world begin to understand.


          Brian J Murrell added a comment - - edited

          A big part of the non-technical frustration that this issue causes for me is that this very severe and show-stopping scaling limitation is not documented anywhere (that I came across at least). It's only once one has spent a huge amount of time developing a pipeline that one "runs into" this issue and is first introduced to it.  One then spends at least as much time iteratively trying to refactor one's Jenkinsfile to try to stay under the limit.  But one does this knowing there is still going to be a limit to how much refactoring can be done and that one day, one is going to have refactored as much as one can and still not be able to add any more stages to their Jenkinsfile.

          This scaling limitation needs to be very clearly and prominently documented right at the start of the Jenkins Pipeline documentation.  It's the first thing that somebody diving into Declarative Pipelines should know.  They should know before they even start that Declarative Pipeline has a scale limit and that unless their workflow is small and limited, one day they will no longer be able to add anything to their pipeline, and that on that day, it's the end of life for their Pipeline.

          Heck.  This scaling limitation is not even mentioned in the  Scaling Pipelines document.


          kgiloo added a comment -

          brianjmurrell: perhaps the most advisable comment I've ever seen concerning this issue...

           


          Jesse Glick added a comment -

          CpsGroovyShell.reparse could certainly detect this exception message and send you to a jenkins.io redirect link taking you to a page with a description of the causes and suggested workarounds.


          Henry Borchers added a comment -

          jglick, Is there any way that something could be done to see how close our pipelines are to the limit? It is very hard to tell, when refactoring my pipelines, which refactorings will provide a large enough impact.

          From my own experience, creating new stages, adding post-stages, and options seems to have a much larger effect on getting closer to the limit than adding more steps. However, I wish I could measure it.

           


          Jesse Glick added a comment -

          Is there any way that something could be done to see how close our pipelines are to the limit?

          Other than trying to run the script? Not that I am aware. A complex series of transformations happens between source code and byte code.


          Brian J Murrell added a comment -

          CpsGroovyShell.reparse could certainly detect this exception message and send you to a jenkins.io redirect link taking you to a page with a description of the causes and suggested workarounds.

          If that's in response to my gripe about this scaling limitation being undocumented, then at that point it is way too late.  Finding this ticket and others (including a Cloudbees KB article) from the error message was not terribly difficult.

          My gripe specifically is my investment in Pipeline (made without knowing this limitation) only to have hit this wall and now having to pivot and do something completely different, like going back to upstream/downstream freestyle jobs, or another CI solution, etc. I frankly have no idea of my path forward here, but I am too frequently hitting this wall and having to refactor my way out of it. All of the low-hanging fruit there is gone now.  My ability to continue to refactor is coming to an end, and I think quite soon.  I'm down to factoring multi-condition when clauses into external functions.  Just about everything in my Jenkinsfile is a single call to an external function.

          Does Matrix solve any of this or is Matrix just a high level construct that compiles down into the same amount of bytecode as writing out a series of parallel stages?


          Henry Borchers added a comment -

          Other than trying to run the script? Not that I am aware. A complex series of transformations happens between source code and byte code.

          Bummer...

          How hard would it be to add something similar to the "pipeline-model-converter/validate" route in the REST API which checks a pipeline to see if it is too large to run?  I get frustrated that I have to commit changes and wait for Jenkins to pick up the job before I know if my changes are within the limit.


          Stephen Tunney added a comment -

          I think everyone on these boards is missing the point here...

          Stop using Jenkinsfile.  It's not sufficient for corporate CI, never has been.  Just have it call out to a single shell script that takes care of everything else.  Stop spinning your wheels.

          The Jenkins/Hudson folks clearly don't care about larger users.  Move on to a newer CI/CD platform that requires less maintenance.  Who wants another pet to take care of?
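
          (For what it's worth, the pattern being suggested amounts to a sketch like this, with a hypothetical script name:)

          pipeline {
              agent any
              stages {
                  stage('Everything') {
                      // all build/test logic lives in one versioned shell
                      // script instead of in the Jenkinsfile itself
                      steps { sh './ci/run-everything.sh' }
                  }
              }
          }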


          Henry Borchers added a comment - - edited

          Stop using Jenkinsfile

          No!

           

          it's not sufficient for corporate CI, never has been

          It could be. It does 90% of everything I could want to do and it keeps getting better every day.

           

          Just have it call out to a single shell script that takes care of everything else. 

          The flow control of the Jenkinsfile is very useful for parallelizing tasks without sacrificing human readability. A simple shell script doesn't do that.

           

          Stop spinning your wheels. 

          I will spin my wheels all I like! Thank you very much

           


          Liam Newman added a comment - - edited

          brianjmurrell moglimcgrath amuniz henryborchers gregturner spinus1 smd sgardell

          Please take a look at https://github.com/jenkinsci/pipeline-model-definition-plugin/pull/405.

          If any of you can try this change out to see if it fixes your issues, that would be great.

          This is an experimental change, so please do not install it on production servers. See the warning in the PR.

          To test this update:
          You must still set: org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

          Install the following incrementals:
          (Was 1.7.3-rc1873.537be530946d but updated)
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-api/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-definition/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-extensions/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-stage-tags-metadata/1.7.3-rc1872.9504c794d213/

          (Sorry, yes, you need to do all four of them.)


          Henry Borchers added a comment -

          bitwiseman, This looks exciting. Thank you for putting effort into this.

           

          I'll try to see if I can get this working in a Docker container but to tell you the truth I'm a little anxious because I'm not exactly sure how.


          Liam Newman added a comment -

          henryborchers
          How to get Jenkins working in a Docker container?
          https://batmat.net/2018/09/07/how-to-run-and-upgrade-jenkins-using-the-official-docker-image/ - use "jenkins/jenkins:lts" instead of a specific version.

          Install the HPI files from each of the above links: https://www.jenkins.io/doc/book/managing/plugins/#from-the-web-ui-2 - You can do all four of them and then restart.

          In the script console at "Manage Jenkins -> Script Console", paste and run this:
          org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

          If you restart your Jenkins instance, you'll need to rerun the script console setting.

          Then try out your pipeline.


          Henry Borchers added a comment -

          bitwiseman

          I used jenkins/jenkinsfile-runner as the base docker image. Added the hpi files from your links to  /usr/share/jenkins/ref/plugins/ and installed the rest of the required plugins using jenkins-plugin-manager.  I ran  docker with -e JAVA_OPTS="-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"

           

          To be fair, I didn't actually get my Jenkinsfile pipeline running. I'm still learning how to use the jenkinsfile-runner, but instead of the "Code too Large" errors, I got errors that I didn't have any agents with the correct labels. At least I know that Jenkins was able to load my Jenkinsfile without crapping out. I still need to configure my dockerfile agent with the other docker agents.

           


          Henry Borchers added a comment -

          bitwiseman This really does fix it.

          The only way I can easily test this right now is with the jenkins/jenkinsfile-runner docker image. However, I can tell that just by swapping the current versions of the plugins with the ones in your PR and setting SCRIPT_SPLITTING_TRANSFORMATION, the Jenkinsfile pipeline that was too large was able to run.


          Liam Newman added a comment -

          henryborchers
          Excellent!
          I'm hoping for feedback from more folks such as brianjmurrell before I release this.


          Jesse Glick added a comment -

          To be clear, the proposed fix applies only to Declarative Pipeline.


          Brian J Murrell added a comment -

          I don't have a pipeline exhibiting this problem any more, since my last occurrence and the refactoring[1] I did to resolve it.  That may not last for long though, as new stages are always being added.  I can't say how soon that will be.

          Ultimately, does this further enhancement of SCRIPT_SPLITTING_TRANSFORMATION still result in a wall where the Jenkinsfile can be once again too big, or does this new mechanism just split as much as is necessary to accommodate any size Jenkinsfile?

          Could the change here make things any worse?  If not, going forward with it is a wash at worst then, yes?

           

          [1] This time it was moving multi-condition when clauses into functions to simplify the when blocks – causing more unnecessary indirection, IMHO.  Reading my Jenkinsfile is now an exercise in jumping all around the file (to see the value of functions used solely to reduce the pipeline block size, not to implement any DRY) and back and forth between repos (pipeline libraries), etc., which is very annoying.


          Liam Newman added a comment - - edited

          brianjmurrell
          There will always be a wall. The limitations in class size are hard coded into the Java Class file format.
          However, this improvement moves the wall exponentially further out - similar to going from a 16-bit integer to a 32-bit integer. It is a massive improvement.

          Even if you are not encountering the issue currently, it would be helpful if you tried this new version to make sure it didn't break anything. Further, you could try reverting the last change that you made to your Jenkinsfile to mitigate this and see if it still works. The only change you might need to make is adding "@Field" to script local variable declarations (def varName="value" in the root of the script).
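
          (For illustration, a hypothetical fragment showing that change at the root of a Jenkinsfile:)

          import groovy.transform.Field

          // Before (rejected by script splitting as a local declaration):
          //   def varName = "value"
          // After (a script-level field, which script splitting can handle):
          @Field def varName = "value"

          pipeline {
              agent any
              stages {
                  stage('Example') { steps { echo varName } }
              }
          }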


          Brian J Murrell added a comment -

          I don't have anything to revert.  I'd never commit a Jenkinsfile that doesn't run in Jenkins.  I wouldn't have the approvals to land such a patch.

          So the last time I ran into this was when I added a stage or two, but in the same commit I also refactored to allow the new stage(s) to fit.

          I'm also not sure when my priorities at my day job will allow me time to stand up a non-production Jenkins server to try this out in.  When I do find the time, I will be sure to update here.


          Matthew Brunton added a comment -

          I thought I'd add that I tested these changes with my skeleton script that reproduced the error for us, and it seems to be working. I also can't make these changes to our main Jenkins instance, but I used my docker setup that I have for reproducing errors.

          Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).
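
          (For anyone probing that threshold, a throwaway Groovy script along these lines emits such a skeleton pipeline; every name in it is illustrative:)

          // Emit a Declarative pipeline with n near-identical stages, each
          // guarded by a when { expression } on its own boolean parameter.
          int n = 35
          def w = new StringWriter()
          w << "pipeline {\n  agent any\n  parameters {\n"
          (1..n).each { w << "    booleanParam(name: 'RUN_$it', defaultValue: true)\n" }
          w << "  }\n  stages {\n"
          (1..n).each { w << "    stage('Stage $it') { when { expression { return params.RUN_$it } } steps { echo 'stage $it' } }\n" }
          w << "  }\n}\n"
          new File('Jenkinsfile.test').text = w.toString()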

          I installed the plugins and activated SCRIPT_SPLITTING_TRANSFORMATION, and now I've been able to run the same script with 60 stages without hitting the error. I might be able to go higher, but our use case is far from hitting that many stages.

          I do want to say thanks for keeping this issue active. We've been running a workaround script for a while now but I've been keeping my eye on progress on this issue, and it looks promising so far. I'm anxious to get back to a pure declarative implementation.


          Brian J Murrell added a comment -

          Previously, I had narrowed down the cause for us to be the number of stages with "when" conditionals. When we get somewhere between 30 and 35 stages with "when" expressions, the error shows up, regardless of any other code in the pipeline (I was able to reproduce with a blank pipeline library with just echo lines).

          My suspicion here is that the complexity of the when conditions adds to the amount of bytecode generated, contributing to the Method code too large situation. I moved all of my multi-condition tests into functions so that all of my when conditions are a single call to the function wrapping the actual multi-condition tests.

          I'm anxious to get back to a pure declarative implementation.

          Indeed. Without unnecessary indirection through functions that have no DRY purpose whatsoever and exist solely to reduce the size of the Method code.


          Doman Panda added a comment - - edited

          I have a couple of questions about workarounds:

          1. I saw that many recommend using shared libraries. How is it different from using functions in the same file but outside of the pipeline{} section?
          2. Some also suggested to me that separating functions in a Jenkinsfile works only if you wrap the pipeline{} section in a call() function like this - call(){pipeline{...}}. Is it true?
          3. Is it me or does using matrix{} greatly raise the risk of getting such an error? I mean it seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?
          4. Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?
          5. Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning it and introducing it in projects?

          I'm asking about these because I really hesitate to use the shared library solution. Most of my functions are not universal and don't make sense for any other projects. Also I use multibranch jobs a lot and can't imagine how static libs can work with dynamic branches when the build process is strictly correlated with the development process (the Jenkinsfile changes with code development) and thus can't be separated. A change in code would have to be reflected in the shared library as well. For example, when developers add a new compilation target, a new matrix axis is added to the Jenkinsfile. And sometimes a new section. How would this work in a multibranch environment with a shared library solution where some branches work with the new Jenkinsfile and some still have to be built the old way?


          Liam Newman added a comment -

          1. I saw that many recommend using shared libraries. How is it different from using functions in the same file but outside of the pipeline{} section?

          The underlying code is completely different. For example, functions in the same file are internally part of the class generated for that script, whereas shared library functions are in their own classes.

          2. Some also suggested to me that separating functions in a Jenkinsfile works only if you wrap the pipeline{} section in a call() function like this - call(){pipeline{...}}. Is it true?

          I have no idea what syntax you are referring to. Do you mean putting the pipeline in a shared library?

          3. Is it me or does using matrix{} greatly raise the risk of getting such an error? I mean it seems to me that I can have much larger pipelines when I'm not using it. Or is it because I use when{} more?

          No, matrix doesn't cause this, it only makes it easier to run into it. If you wrote out the same pipeline manually that matrix generates, you'd get the same issue. But you would also have a much longer and more repetitive Jenkinsfile.
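
          (For illustration, a compact matrix like this sketch expands into one generated stage per axis combination - 3 x 2 = 6 "Build" stages here - which is where the bytecode goes:)

          matrix {
              axes {
                  axis {
                      name 'OS'
                      values 'centos7', 'leap15', 'ubuntu20.04'
                  }
                  axis {
                      name 'COMPILER'
                      values 'gcc', 'clang'
                  }
              }
              stages {
                  stage('Build') {
                      // axis values are exposed as environment variables
                      steps { sh './ci/build.sh "$OS" "$COMPILER"' }
                  }
              }
          }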

          4. Do things like the number of variables, maps (arrays) or objects defined outside of the pipeline script have an impact on this problem?

          Those things do not cause this problem, but their presence can make it harder for the declarative engine to mitigate it.

          5. Some say that using scripted (imperative) syntax does not trigger this problem. I've never used it. Is it worth learning it and introducing it in projects?

          This is false. Scripted pipeline syntax can also encounter this issue, but it is less common because there is no extra layer like there is in Declarative. However, when scripted pipelines do encounter this problem, it is purely up to the writers of that script to work around it. In Declarative, I have been able to process the pipeline code to transparently work around the issue in many cases (with SCRIPT_SPLITTING_TRANSFORMATION).


          Paweł added a comment - - edited

          Greetings,

          Getting the error from the sheer number of "when" blocks in the pipeline.
          A test pipeline with 35 booleanParam parameters and 35 stages with when { expression { return params.Foo } }.
          I tested Jenkins 2.235.5 and plugins in version 1.7.1.

          I installed
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-api/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-definition/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-model-extensions/1.7.3-rc1872.9504c794d213/
          https://repo.jenkins-ci.org/incrementals/org/jenkinsci/plugins/pipeline-stage-tags-metadata/1.7.3-rc1872.9504c794d213

          then ran
          org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
          and got this new error:

          org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
          General error during semantic analysis: SCRIPT_SPLITTING_TRANSFORMATION is incompatible with local variable declarations. Add the the '@Field' annotation to local variable declarations: org.codehaus.groovy.ast.expr.DeclarationExpression@26fafdbf[org.codehaus.groovy.ast.expr.VariableExpression@49600128[variable: failedStages]("=" at 1:1:  "=" )org.codehaus.groovy.ast.expr.ListExpression@5b30fe0e[]].
          

          errorIncomaptiblewithlocalvar.txt
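
          (Per the message's suggestion, the declaration it points at would become something like:)

          import groovy.transform.Field

          @Field def failedStages = []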


          Stephane Odul added a comment -

          We use the declarative pipeline and our main CI pipeline is close to 800 lines with 30 parallel stages all with when clauses.

          Since we use Kubernetes, each stage spins up its own pod, and we have a shared Jenkins library to simplify the pod definitions as well as running the individual steps.

          The SCRIPT_SPLITTING_TRANSFORMATION  flag did nothing noticeable.

          As a workaround we tried to group our parallel stages to share the when statements, but nesting parallel stages inside parallel stages is not allowed.

          Short of creating a sub-pipeline per parallel group I'm not really seeing a way out of this problem. This is annoying since it will probably add a couple of minutes to our pipelines and we'll have to track and copy test results files between pipelines.

          This seems to be a very big design flaw of declarative pipelines, where JVM limitations impact the ability to use the DSL.

          In the absolute short term we will stop creating more parallel stages which will slow down the productivity of our engineering organization.

          Considering this bug is several years old and seems to impact a lot of organizations, it would be good if the documentation described this problem and warned about the limits of declarative pipelines.


          Bishoy added a comment -

That's horrible. Should we expect a fix for this soon?


          Liam Newman added a comment - - edited
          Note from the Maintainers

Please upgrade to v1.8.3 or greater and try the feature flag in the description before commenting on this issue.

          jenkinsneveragain
          Did you try what the error suggested? It is pretty specific.

          sodul
          I'm surprised script splitting had no effect.
Your pipeline is still in your Jenkinsfile, right?
And is pipeline the only thing declared in your Jenkinsfile?
          Could you try this again with the latest release?


          Stephane Odul added a comment - - edited

          bitwiseman I missed the version requirement. We have:

          • pipeline-build-step:2.13
          • pipeline-github-lib:1.0
          • pipeline-graph-analysis:1.10
          • pipeline-input-step:2.12
          • pipeline-milestone-step:1.3.1
          • pipeline-model-api:1.7.2
          • pipeline-model-definition:1.7.2
          • pipeline-model-extensions:1.7.2
          • pipeline-rest-api:2.18
          • pipeline-stage-step:2.5
          • pipeline-stage-tags-metadata:1.7.2
          • pipeline-stage-view:2.18
          • pipeline-utility-steps:2.6.1

We are on Jenkins 2.263.3 LTS, and we are encountering another issue that prevents any job from starting when we update certain plugins, so I'm a little worried about upgrading until JENKINS-64727 is addressed.

          I will try to upgrade over the weekend to minimize potential outages for our internal developers, since the bug is random and not reliably reproducible on other instances.

          As far as the pipeline is concerned we start it with this:

          Map pr_focus = [:]
          String prepare_uuid = UUID.randomUUID().toString().take(8)
          pipeline {
            agent none
            stages {
              stage ('Prepare') {
                  agent {
                      kubernetes {
                          label "prepare-ci-${prepare_uuid}"

This uuid is for the kubernetes plugin later on, since our agent definitions need to have a guaranteed unique id. We can probably get a uuid from our library instead, though.

The Map is a list of stage groups that we enable/disable based on which files have changed. This allows us to skip stages for our PRs when the tests would not be relevant to the diff; for example, if only Python code has changed we don't need to run Golang unit tests.

                      steps {
                          script {
                              prepare()
                              sh "jenkins/pr_changes.sh"
                              container('python') {
                                  sh "jenkins/pr_focus.py > pr_focus.txt"
                              }
                              pr_focus = readProperties(file: 'pr_focus.txt')
                              echo "pr_focus: ${pr_focus}"
                          }
                      }
          

          Then later:

                          stage('Go vet') {
                              when {
                                  not { equals expected: '1', actual: pr_focus.SKIP_GO_STAGES }
                                  beforeAgent true
                              }
          

I think we had to declare the map at the top level to ensure the values would be available to all stages, but if you have a recommendation for another approach we are open to trying it.
           


          Liam Newman added a comment -

Ah, I see. The reason script splitting didn't work is that it silently disabled itself when it saw any other expressions in the Jenkinsfile outside of pipeline.

The new version, v1.8.2, allows other expressions, but not bare variable declarations, and it throws an informative error rather than silently continuing with script splitting disabled. In v1.8.2 with script splitting enabled, variable declarations such as Map pr_focus = [:] and String prepare_uuid = UUID.randomUUID().toString().take(8) need to have the @Field annotation added to them.

          So, your Jenkinsfile would look like:

import groovy.transform.Field  // needed so the @Field annotation resolves

@Field
Map pr_focus = [:]

@Field
String prepare_uuid = UUID.randomUUID().toString().take(8)
          
          pipeline { ... }
          


Oleh Moskovych added a comment -

bitwiseman, sorry for bothering you.

I currently have version 1.8.2. Does that mean the SCRIPT_SPLITTING_TRANSFORMATION flag is enabled by default?

          Regarding https://github.com/jenkinsci/pipeline-model-definition-plugin/releases/tag/pipeline-model-definition-1.8.0

          experimental feature that could be activated by setting SCRIPT_SPLITTING_TRANSFORMATION=true

So I suspect it should be disabled by default?

Currently I'm able to use variables declared outside of the `pipeline` block in all stages,

except the ones inside a `matrix` definition (for those I used `@Field`), which is weird. Is that expected behavior?

Any recommendation for defining global variables (strings, maps) in declarative pipelines, in case a variable should be used by several stages?


          Paweł added a comment - - edited

           

          bitwiseman

          Paweł
          Did you try what the error suggested? It is pretty specific.  

No, I was not sure, and I was testing it in the evening on production, so I moved to another workaround quickly:
https://code-held.com/2020/01/22/jenkins-local-shared-library/
I tested it locally, and when implementing it on prod I noticed a method displaying the Jenkins build status ("build abc is OK").
I've removed it and replaced it with Jenkins built-ins, and the testing team has not complained to me about the missing "status OK" method so far.

def failedStages = []  // <-- I removed this declaration

pipeline {
    agent none
    // ... (elsewhere in the pipeline)
    failedStages.add(env.FAILURE_STAGE)
    // ...
    // removed:
    stage('Results') {
        steps {
            script {
                if (failedStages.isEmpty()) {
                    echo("${env.JOB_NAME} - OK")
                } else {
                    echo(abc.getMessage(failedStages))
                }
            }
        }
    }
    mattermostNotify(currentBuild.result, abc.getMessage(failedStages), 'ABC')

replaced by

                mattermostNotify("${currentBuild.currentResult}", "Build failed at stage: ${env.FAILURE_STAGE}\nReason: ${env.FAILURE_REASON}", 'ABC')
          

           

           


          Liam Newman added a comment -

          moskovych
          Yes, it is disabled by default.

          jenkinsneveragain
          I'm not sure I understand what you're doing there, but it seems unrelated to this issue.
The error said: "Add the '@Field' annotation to local variable declarations". Is there some other way this could be said that would be clearer?


Oleh Moskovych added a comment -

bitwiseman, ok, so can you explain this, please:

I'm able to use variables declared outside of the `pipeline` block in all stages,

except the ones inside a `matrix` definition (for those I used `@Field`).

Does matrix have different logic?

           

And again: any recommendation for defining global variables (strings, maps) in declarative pipelines, in case a variable should be used by several stages? Documentation?
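
For instance, for plain strings I can get part of the way with the declarative environment directive, which exposes values to every stage (a simplified sketch; the prefix value is just an example):

pipeline {
    agent any
    environment {
        // environment values are strings only
        RESOURCE_PREFIX = "rp-${new Date().getTime()}"
    }
    stages {
        stage('Show') {
            steps { echo "prefix: ${env.RESOURCE_PREFIX}" }
        }
    }
}

...but maps like dockerParameters don't fit there, so they still end up as Groovy variables outside the pipeline block.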


          Stephane Odul added a comment -

          bitwiseman After adding @Field we got:

          00:00:04.555  WorkflowScript: 42: unable to resolve class Field ,  unable to find class for annotation
          

          With the following plugins:

          - pipeline-build-step:2.13
          - pipeline-github-lib:1.0
          - pipeline-graph-analysis:1.10
          - pipeline-input-step:2.12
          - pipeline-milestone-step:1.3.2
          - pipeline-model-api:1.8.3
          - pipeline-model-definition:1.8.3
          - pipeline-model-extensions:1.8.3
          - pipeline-rest-api:2.19
          - pipeline-stage-step:2.5
          - pipeline-stage-tags-metadata:1.8.3
          - pipeline-stage-view:2.19
          - workflow-aggregator:2.6
          - workflow-api:2.40
          - workflow-basic-steps:2.22
          - workflow-cps:2.87
          - workflow-cps-global-lib:2.17
          - workflow-durable-task-step:2.36
          - workflow-job:2.40
          - workflow-multibranch:2.22
          - workflow-scm-step:2.11
          - workflow-step-api:2.23
          - workflow-support:3.7
          

          Am I missing something? Do you have a full example of a declarative pipeline that uses the `@Field` annotation?


          Oleh Moskovych added a comment - - edited

sodul, in my case I needed to add one `import` at the top of the file to be able to use it:

          import groovy.transform.Field
          

          and then define this annotation:

          @Field Map dockerParameters = [...]
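
Putting it together, a minimal complete sketch (hypothetical values; the pipeline body is trimmed to a single stage) would be:

#!/usr/bin/env groovy

import groovy.transform.Field

// @Field turns these script-local declarations into fields of the generated
// script class, which is what script splitting requires
@Field String resourcePrefix = new Date().getTime().toString()
@Field Map dockerParameters = [registry: "docker.example.com"]

pipeline {
    agent any
    stages {
        stage('Show') {
            steps {
                echo "prefix: ${resourcePrefix}"
                echo "registry: ${dockerParameters.registry}"
            }
        }
    }
}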
          


          Stephane Odul added a comment -

Thanks moskovych, it worked perfectly!

bitwiseman, to answer your question about how to make the error message better: I recommend putting an explicitly spelled-out example of a pipeline with the @Field annotation and the required import in the documentation, as many of us are not Groovy experts. The error message should contain a short link to that documentation so we can clearly see how to implement the workaround.


          Stephane Odul added a comment -

bitwiseman we ran into a bit of an issue, which was a facepalm for me in hindsight. Adding the @Field annotation worked well, but now the other branches (we have hundreds of branches) that do not have the new annotation are failing.

I was thinking that the new flag could behave in a backward-compatible mode: instead of flat out failing when the @Field annotation is missing, you could log a warning and fall back to the existing behavior. This way all Jenkinsfiles that were not previously failing would keep on working.


          Liam Newman added a comment -

          moskovych
          You'll need to provide an example.

          sodul
          Thanks for the feedback. In the final version, I'll definitely do that.
You can set "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true". Your fixed pipeline that uses "@Field" will still use the newer/better script splitting, and the other pipelines will start working again. FYI, I know this is annoying, but it had to be done this way. People were complaining that "script splitting isn't working" without taking the time to read that it doesn't work with locally declared variables. This way, anyone not using locally declared variables (which are not recommended anyway) gets the best possible behavior, and anyone who is using them gets clear feedback about their choices. That feedback needs improvement, but it is better than silently not doing what the user asked for.
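
In Script Console terms, that combination is just two static fields (a sketch, assuming a plugin version that has both flags; as with anything set in the Script Console, it only lasts until the next restart):

// Jenkins Script Console (Groovy)
import org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer

RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true        // opt in to script splitting
RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES = true // tolerate non-@Field locals

For a permanent setting, pass the equivalent -D system properties on the controller's JVM command line instead.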


          Stephane Odul added a comment -

          bitwiseman
Some of our pipelines, on another Jenkins instance, call other pipelines. Since we need to pass along parameters, we have variables such as this before the `pipeline {}` section.

          @Field List parameters = [
              gitParameter(name: 'BRANCH', value: params.BRANCH),
              booleanParam(name: 'SKIP', defaultValue: false)
          ]

          We then have several stages that get the parameters passed around.

                              when { expression { params.SKIP == false } }
                              steps {
                                  build job: 'other', propagate: true, wait: true, parameters: parameters
                              }
          

Unfortunately we get an exception thrown, apparently on params:
          groovy.lang.MissingPropertyException: No such property: params for class: groovy.lang.Binding

          We tried using `env` but that does not seem to be available either.

This is not something we can easily move to our shared library, since the list of parameters is specific to each of these pipelines.


          Oleh Moskovych added a comment - - edited

bitwiseman, ok, here is a small example of my pipeline:

           

          #!/usr/bin/env groovy
          
          //library("jenkins_shared_library@1.0.0")
          
          //@groovy.transform.Field
          String resourcePrefix = new Date().getTime().toString()
          
          //@groovy.transform.Field
          Map dockerParameters = [
              registry: "docker.example.com",
              registryType: "internal",
              images: [
                  image1: [image: "image1", dockerfile: "Dockerfile1"],
                  image2: [image: "image2", dockerfile: "Dockerfile2"]
              ]
          ]
          
          pipeline {
            agent any
            options { skipDefaultCheckout true }
            parameters {
              booleanParam defaultValue: true, description: 'Build & Push image1', name: 'image1'
              booleanParam defaultValue: true, description: 'Build & Push image2', name: 'image2'
            }
          
            stages {
              stage("Prepare") {
                options { skipDefaultCheckout true }
                failFast true
                parallel {
                  stage('Test1') {
                    steps {
                      // All variables available in simple stages and parallel blocks
                      echo "resourcePrefix: ${resourcePrefix}"
                      echo "dockerParameters: ${dockerParameters}"
                    }
                  }
                  stage('Test2') {
                    steps {
                      echo "resourcePrefix: ${resourcePrefix}"
                      echo "dockerParameters: ${dockerParameters}"
                    }
                  }
                }
              }
          
          
              stage("Docker") {
                options { skipDefaultCheckout true }
                matrix {
                  axes {
                    axis {
                      name 'COMPONENT'
                      // Note: these values are the same as described in dockerParameters and params
                      values 'image1', 'image2'
                    }
                  }
                  stages {
                    stage("Build") {
                      when {
                        beforeAgent true
                        expression { params[COMPONENT] == true }
                      }
                      // agent { kubernetes(k8sAgent(name: 'dind')) }
                      steps {
                        // Failing on resourcePrefix/dockerParameters, as it doesn't have Field annotation
                        // Question is: why variables are not available inside matrix?
          
                        echo "resourcePrefix: ${resourcePrefix}"
                        echo "dockerParameters: ${dockerParameters}"
          
                        // Here is one step as example:
                        //dockerBuild(
                        //    image: dockerParameters.images[COMPONENT].image,
                        //    dockerfile: dockerParameters.images[COMPONENT].dockerfile
                        //)
                      }
                    }
                  }
                }
              }
          
            }
          }
          
          

           

The result is the following:

stage `Prepare` works fine either way, as expected.

stage `Docker` fails (in each matrix stage) with the message:

groovy.lang.MissingPropertyException: No such property: resourcePrefix for class: groovy.lang.Binding

unless I add the annotation `@groovy.transform.Field`.

The same goes for `dockerParameters`, where I have a map of different values which are similar and share some common values.

Note: this is just an example; there are parameters that we use in different stages, and copy-pasting all of them into each stage is not an appropriate solution. Defining them as common/global outside of the `pipeline` block is the only way to do it, isn't it?

           

Additional info: plugin version 1.8.2 / Jenkins version 2.235.3. No splitting flags (described in PR #405) or experimental features were ever enabled.

           

          Any ideas?


          Stephane Odul added a comment - - edited

We found a partial workaround for our pipelines that need to pass around parameters. We used to define a variable, but with `params` and `env` not available at that point, switching to a `get_params()` method, so that these values are resolved only when needed, seems to do the trick.

          Restart from stage is also working as expected.


          Liam Newman added a comment - - edited

          sodul
          This is very useful data.
          Can you give an example of what the get_params() form looks like?


          Stephane Odul added a comment - - edited
          def get_params() {
              return [
                  gitParameter(name: 'BRANCH', value: params.BRANCH),
                  string(name: 'FOO', value: env.FOO),
                  booleanParam(name: 'SKIP', value: params.SKIP)
              ]
          }
          
          pipeline {
              ...
                  build(job: 'other/pipeline', propagate: true, wait: true, parameters: get_params())
              ...
          }
          

          Some of our pipelines include a more complex get_build_params():

def get_build_params(name) {
    return [job: name, propagate: true, wait: true, parameters: get_params()]
}
          

So the build call can be as simple as build(get_build_params('other/pipeline')), which greatly simplifies our Jenkinsfiles and reduces copy-pasting, especially for some of our test automation pipelines that orchestrate calls to many sub-pipelines. Since the various parameters are pipeline specific, we do not really want to put them in the library, as it would make the library much larger than necessary; furthermore, the parameters can be branch specific, which makes using a shared library less ideal.

Initially we had `@Field my_params = [...]`, but that was failing since `env` and `params` are not available at that point. We tried to move the variable definition into the first stage under a script block, but that would break `restart from stage` since the values are not persisted. This alternative approach recreates the same data over and over, but that's pretty lightweight and seems to be fully backward/forward compatible.


Oleh Moskovych added a comment -

bitwiseman, I've created a new bug, as this ticket's description doesn't match my case:

https://issues.jenkins.io/browse/JENKINS-64846

The workaround with the @Field annotation still forces users to fix their pipelines, which means this is a breaking change.


          Torsten Kleiber added a comment - - edited

After upgrading my staging environment from 2.277.3 to 2.277.4, and all of my plugins, I now get the error again. On the production environment the same pipeline works. The pipeline-model-definition plugin is v1.8.4 on both instances. The JVM property is configured in JENKINS_JAVA_OPTIONS in the file /etc/sysconfig/jenkins on both instances. If I look at System Information I can see other entries from JENKINS_JAVA_OPTIONS, like java.awt.headless, in both environments, but org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION only in my production environment.

          If I run 

          org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true
          

in the script console, the job runs until the next restart of Jenkins (via Jenkins itself, "systemctl restart jenkins.service", or rebooting the server); after that it fails again.

          So at the moment I cannot upgrade my production environment anymore.

           


          Stephane Odul added a comment - - edited

For reference, we upgraded to 2.277.4 a couple of weeks ago and everything works normally for us.
          We do have this set on the command line of the server:

          -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

tkleiber With the monitoring plugin, are you able to see the JVM arguments and confirm that this CLI option is passed properly?


Torsten Kleiber added a comment -

I don't need the monitoring plugin, as I can normally see the entry under "Manage Jenkins" -> "System Properties", and I do see it in production. If I set this on staging via "Manage Jenkins" -> "Script Console", I cannot see it in "System Properties" and it works only until the next Jenkins restart.

I saw the value "true" for the entry "org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION" in "System Properties" before my upgrade on the staging environment, and I still see it on my production environment, which is not upgraded.

It seems to me that you start your Jenkins via the command line; this is not the case here.

We start Jenkins as a service via "systemctl start jenkins.service" on staging (OS SLES 12) and "service jenkins start" on production (OS SLES 11). So setting "-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true" in "JENKINS_JAVA_OPTIONS" in the file "/etc/sysconfig/jenkins" seems the only option for our use case, and this has worked before on staging and still works on production. Are there any other options to set this when starting Jenkins as a service?


Jeffrey McClain added a comment -

bitwiseman just a heads up: the issue number referenced within the built-in Jenkins error message related to this issue has a typo.

It should be this issue, JENKINS-37984, instead of JENKINS-34987:

          General error during semantic analysis: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names].
          
          java.lang.IllegalStateException: [JENKINS-34987] SCRIPT_SPLITTING_TRANSFORMATION is an experimental feature of Declarative Pipeline and is incompatible with local variable declarations inside a Jenkinsfile. As a temporary workaround, you can add the '@Field' annotation to these local variable declarations. However, use of Groovy variables in Declarative pipeline, with or without the '@Field' annotation, is not recommended or supported. To use less effective script splitting which allows local variable declarations without changing your pipeline code, set SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true . Local variable declarations found: [variable names]. 
          


          Jesse Glick added a comment -

          jmcclain feel free to file a PR to fix that!


Torsten Kleiber added a comment -

The workaround here, on Jenkins LTS 2.289.1 and the latest plugins, only works when activated via the script console, not via JENKINS_JAVA_OPTIONS in /etc/sysconfig/jenkins. So it works again only until Jenkins is restarted.


          Stephane Odul added a comment -

tkleiber We have not upgraded to LTS 2.289.1 yet so I cannot confirm, but it seems your /etc/sysconfig/jenkins is not being applied when your Jenkins instance is launched. You need to check that the java process has the -D option passed on its command line. You can check that with the monitoring plugin.

Or, if you have shell access to the server, run ps auxwww.


Yes - you are right!

Because the staging server was also upgraded from SLES 11 to 12, the service definition has changed from service to systemctl.

According to "Installing Jenkins as a Unix daemon" in the Jenkins wiki, the production server uses the "Java Service Wrapper" configuration, which reads /etc/sysconfig/jenkins.

The staging server now uses the "OpenSuse" "Linux service - systemd" configuration from that page, which does not read /etc/sysconfig/jenkins anymore.

I have now added the JENKINS_JAVA_OPTIONS from /etc/sysconfig/jenkins directly to the ExecStart parameter in /usr/lib/systemd/system/jenkins.service, and everything works again!

          Thanks!
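
For reference, the same thing can be done without editing the packaged unit file, via a systemd drop-in (a sketch; whether the unit consumes JAVA_OPTS depends on the packaging, so verify the variable name for your install):

# hypothetical drop-in, created with: systemctl edit jenkins
# (written to /etc/systemd/system/jenkins.service.d/override.conf)
[Service]
Environment="JAVA_OPTS=-Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true"

followed by "systemctl daemon-reload" and a Jenkins restart. A drop-in survives package upgrades, whereas edits to /usr/lib/systemd/system/jenkins.service can be overwritten.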


          Liam Newman added a comment - - edited

          tkleiber
          I'm glad you were able to figure out the problem.

tkleiber sodul moskovych jmcclain
How is the feature behaving for you? Do you have any feedback, comments, or observations? I'm trying to evaluate its readiness for wider use.


          Jeffrey McClain added a comment - - edited

          How is the feature behaving for you? Do you have any feedback, comments, observations?

          bitwiseman For reference, initially one of my larger pipelines stopped working, so I tried the 

          org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

workaround; however, it just resulted in a different message about needing to set

          SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true

in order to use variables defined outside of my pipeline. Even then, I still needed to add "import groovy.transform.Field" and "@Field" declarations to my variables, and the "env." prefix seemed to stop being recognized by Jenkins for defining environment variables within my pipeline, etc.

Eventually I just moved some of my pipeline stages to a downstream helper job to get the overall pipeline working again, which I'm guessing is the recommended approach anyway, rather than manually changing the experimental settings for SCRIPT_SPLITTING_TRANSFORMATION and SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES to true.
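
The split itself is just the stock build step; roughly like this (the job name and parameter here are made up for illustration):

// in the big upstream Jenkinsfile: delegate a chunk of stages to a helper job
stage('Heavy tests') {
    steps {
        build job: 'folder/helper-tests',      // hypothetical downstream pipeline
              wait: true, propagate: true,     // fail this build if the helper fails
              parameters: [string(name: 'GIT_COMMIT', value: env.GIT_COMMIT)]
    }
}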

          I'd say it definitely seems to be a bit of a breaking change, but if you think the optimization is worth it then I don't really mind. I feel like the error message could be a bit more intuitive though, maybe something like:

          "Your declarative pipeline code is [x]kb which exceeds Java's maximum bytecode size of 64kb and therefore can't be parsed by Jenkins. Consider moving some stages to downstream pipelines or splitting your pipeline into multiple smaller pipelines to reduce your code size to satisfy Java's 64kb limit. Alternately, set org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true as a workaround. See Jenkins-37984 for more details."


Torsten Kleiber added a comment -

> bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

Our main declarative multibranch pipeline only works with the SCRIPT_SPLITTING_TRANSFORMATION feature; without it we would have to go back to the classic upstream/downstream approach. We don't use variables outside of the pipeline at the moment. All other pipelines are small enough.

We use trunk-based development in a monorepo for our main loan application, with different backend and frontend technologies, and not all of them are implemented yet.

Although we try to move a lot of logic to pipeline libraries, a lot of stages remain because of when conditions that depend on the branching model and repository names (e.g. for testing Jenkins staging). Furthermore, we need different pipeline stages for environments like development, test, and production, and for different controllers building on different operating systems.

One thing we miss at the moment is better parallel support, as other systems like UC4 have, e.g. parallel inside parallel and the corresponding visualization in Blue Ocean.


Oleh Moskovych added a comment -

> bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

We are not using the SCRIPT_SPLITTING_TRANSFORMATION flag (it is false by default, right?).

Our pipelines mostly use methods/functions from a Jenkins shared library, and
all pipelines contain some global variables before the pipeline block (variables with some Groovy logic, used in more than two stages, or that should be defined as global).
An example pipeline can be found in this issue's description: JENKINS-64846

Pipelines are separated from functions, so there are no pipeline blocks in the shared library's call functions, like it was shown here: JENKINS-64846?focusedCommentId=407258

bitwiseman, I know this is beta, but is there any documentation available describing the flags and the behavior of pipelines? It would be good to have examples without diving into the plugin source code, especially with our approach of using Groovy outside the pipeline block.


Torsten Kleiber added a comment -

As I want to test a specific library branch, I tried to use the following notation:

           

          @Library('shared-libraries@feature/test-shared-library') _
          
          pipeline {
            // long pipeline here
          }

Therefore I tried to use the following properties combined:

           

          -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true 

But as soon as I add the second parameter, the first one no longer works. Is this the intended behavior? So I cannot use local libraries in big pipelines? Or do I have to do this another way?

           

          Jenkins 2.387.1 on SLES 12.5.

           


Henry Borchers added a comment -

How is this still an issue in 2024?


Brian J Murrell added a comment -

> How is this still an issue in 2024?

Because the issue is the result of a fundamental design decision.


          Dee Kryvenko added a comment -

          > Because the issue is the result of a fundamental design decision.

          Of using Jenkins in the first place. So glad I moved away from it. And I can finally get work done instead of fighting made-up issues all the time. Never been happier.


          Henry Borchers added a comment -

          For the better part of a decade I have been using declarative pipelines. I have found that they are very expressive as well as very easy to read and maintain. They are more powerful than GitHub Actions YAML files. However, from the latest comments it sounds like everyone else has abandoned the declarative pipeline. Am I wrong about this?

          I have used scripted sections within my declarative pipelines for things that I can't easily do within the constraints of the declarative style. However, the idea of a purely scripted pipeline seems potentially messy. If you've abandoned the declarative pipeline, what have you moved on to instead?


          Torsten Kleiber added a comment -

          We are still using declarative pipelines successfully with the workaround configured.


          Dee Kryvenko added a comment -

          > If you've abandoned the declarative pipeline, what have you moved on to instead?

          Never used declarative pipelines to begin with. Jenkins is desperately trying to be a "platform". It is not a "platform". It is a cron server with a web interface. It can be turned into a "platform" by "platform" engineers, in which case the pipeline would be generated automatically - it doesn't have to be readable or declarative, as that would only add artificial limitations and made-up issues. Which is what this is. Jenkins can never be a "platform", as everyone's last-mile challenges are going to be very unique. Jenkins fails to understand that, but it is very successful at alienating the "platform" people who at one point were its biggest advocates and helped organizations adopt it. No more.

          As Jenkins tries to be a "platform", it also tries to be smart about the confused deputy problem. Which, again, is not its problem to solve - it only gets in the way for those of us who actually need to solve it.

          There might be some number of users for whom Jenkins does solve last-mile issues out of the box, and who do not require "platform" engineers. Good for them - but I would argue they are not doing anything complex to begin with, and would probably be better off with something much simpler and less maintenance-heavy, maybe GitHub Actions ARC. The fact that in 2024 Jenkins still cannot even restart without downtime - not to mention horizontal scaling, not to mention direct serialization of in-memory state into XML on disk, not to mention crazy IOPS utilization - is a joke. They missed the Kubernetes memo. Any relatively large Jenkins deployment becomes a maintenance nightmare. I have been running GHA ARC for CI and ArgoCD for CD for over a year now - I maybe spent 30 mins tops on their maintenance the entire year, and my users had zero service interruptions. State is distributed, everything scales horizontally... I have so much free time now - to write this message, for example.


          Heiko Nardmann added a comment -

          Others seem to have the same problem: https://confluence.atlassian.com/jirakb/groovy-script-cannot-be-executed-due-to-method-code-too-large-error-1063568679.html

          John Malmberg added a comment -

          The partial fix does not allow the Jenkinsfile to specify a different pipeline library branch, because it considers "_" to be a local variable and dumps a stack trace.

          Are there any workarounds for that?
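
          One workaround that might be worth trying (a sketch, not verified against the splitting transform): the dynamic library step avoids the "_" annotation target entirely, at the cost of resolving the library at runtime:

          // Instead of the annotation form, which binds the "_" variable:
          //   @Library('shared-libraries@feature/test-shared-library') _
          // load the library dynamically, so no annotation target variable is needed.
          library 'shared-libraries@feature/test-shared-library'

          pipeline {
              agent any
              stages {
                  stage('Example') {
                      steps {
                          echo 'Library loaded dynamically'
                      }
                  }
              }
          }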


            Assignee: Unassigned
            Reporter: Anudeep Lalam
            Votes: 87
            Watchers: 103