JENKINS-37984

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large! error in pipeline script


Details

    Description

      Note from the Maintainers

      There is a partial fix for this for Declarative Pipelines in pipeline-model-definition-plugin v1.4.0 and later, significantly improved in v1.8.4. Because of the extent to which it changes how pipelines are executed, it is turned off by default. It can be turned on by setting a JVM property (either on the command line or in the Jenkins script console):

      org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true 

      As noted, this still works best with a Jenkinsfile that has the pipeline directive as the only root item in the file.
      Since v1.8.2 this workaround reports an informative error for pipelines that use `def` variables before the pipeline directive. Add a @Field annotation to those declarations.
      This workaround generally does NOT work if the pipeline directive is inside a shared library method. If this is a scenario you want, please come join the Pipeline Authoring SIG and we can discuss it.

      Please give it a try and provide feedback. 
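
      For reference, a minimal sketch of flipping this property at runtime from the Jenkins script console (assuming pipeline-model-definition-plugin 1.4.0+ is installed and exposes the flag as a public static field; the value resets on the next controller restart unless it is also passed as a JVM option):

      // Script console sketch: enable Declarative script splitting for the running controller.
      // Assumption: RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION is a public static field.
      import org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer
      RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
      println "Script splitting enabled: ${RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION}"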

      Hi,

      We are getting the error below in a Pipeline that has some 495 lines of Groovy code. Initially we assumed that one of our methods had an issue, but once we remove any 30-40 lines of Pipeline Groovy, the issue no longer occurs.

      Can you please suggest a quick workaround? It's a blocker for us.

      org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
      General error during class generation: Method code too large!
      
      java.lang.RuntimeException: Method code too large!
      	at groovyjarjarasm.asm.MethodWriter.a(Unknown Source)
      	at groovyjarjarasm.asm.ClassWriter.toByteArray(Unknown Source)
      	at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:815)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      
      1 error
      
      	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
      	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1073)
      	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
      	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
      	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
      	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:410)
      	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:373)
      	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:213)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:410)
      Finished: FAILURE
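
      For context, this failure occurs when the generated bytecode for a single method (typically the script body or one large closure) exceeds the JVM's 64 KB per-method limit. A minimal, illustrative sketch of the usual manual workaround is to pull long step sequences out into separate methods or shared-library steps so that no single generated method grows too large; the method and step names below are hypothetical:

      // Illustrative Jenkinsfile sketch (Scripted syntax): keep the main body thin and
      // move long step sequences into separate methods, each of which compiles to its
      // own JVM method and therefore has its own 64 KB bytecode budget.
      def buildApp() {
          sh 'make build'      // ...many more steps in a real pipeline
      }

      def runTests() {
          sh 'make test'       // ...many more steps in a real pipeline
      }

      node {
          stage('Build') { buildApp() }
          stage('Test')  { runTests() }
      }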
      

      Attachments

        1. errorIncomaptiblewithlocalvar.txt
          8 kB
        2. java.png
          294 kB
        3. JenkinsCodeTooLarge.groovy
          45 kB
        4. Script_Splitting.groovy
          44 kB
        5. Script_Splittingx10.groovy
          519 kB


          Activity

            bitwiseman (Liam Newman) added a comment (edited)

            tkleiber
            I'm glad you were able to figure out the problem.

            tkleiber sodul moskovych jmcclain
            How is the feature behaving for you? Do you have any feedback, comments, observations? I'm trying to evaluate its readiness for wider use.

            jmcclain (Jeffrey McClain) added a comment (edited)

            > How is the feature behaving for you? Do you have any feedback, comments, observations?

            bitwiseman For reference, initially one of my larger pipelines stopped working, so I tried the 

            org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true

            workaround; however, it just resulted in a different message about needing to set

            SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true

            in order to use variables defined outside of my pipeline. Even then, I still needed to add "import groovy.transform.Field" and "@Field" declarations to my variables, and the "env." prefix seemed to stop being recognized by Jenkins for defining environment variables within my pipeline, etc.

            Eventually I just moved some of my pipeline stages to a downstream helper job to get the overall pipeline working again, which I'm guessing is the recommended approach anyway, rather than manually changing the experimental settings SCRIPT_SPLITTING_TRANSFORMATION and SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES to true.

            I'd say it definitely seems to be a bit of a breaking change, but if you think the optimization is worth it then I don't really mind. I feel like the error message could be a bit more intuitive though, maybe something like:

            "Your declarative pipeline code is [x]kb which exceeds Java's maximum bytecode size of 64kb and therefore can't be parsed by Jenkins. Consider moving some stages to downstream pipelines or splitting your pipeline into multiple smaller pipelines to reduce your code size to satisfy Java's 64kb limit. Alternately, set org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true as a workaround. See Jenkins-37984 for more details."


            tkleiber (Torsten Kleiber) added a comment

            > bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

            Our main declarative multibranch pipeline works only with the SCRIPT_SPLITTING_TRANSFORMATION feature; without it we would have to go back to the classic upstream/downstream approach. We don't use variables outside of the pipeline at the moment. All other pipelines are small enough.

            We use trunk-based development in a monorepo for our main loan application, with different backend and frontend technologies, and not all of them are implemented yet.

            Although we try to move a lot of logic to pipeline libraries, a lot of stages remain because of when conditions that depend on the branching model and repository names (e.g. for testing Jenkins staging). Furthermore, we need different pipeline stages for environments like development, test and production, and for different controllers building on different operating systems.

            One thing we miss at the moment is better parallel support, as other systems like UC4 have, e.g. parallel within parallel and the corresponding visualization in Blue Ocean.


            moskovych (Oleh Moskovych) added a comment

            > bitwiseman: How is the feature behaving for you? Do you have any feedback, comments, observations?

            We are not using the SCRIPT_SPLITTING_TRANSFORMATION setting (it is false by default, right?).

            Our pipelines mostly use methods/functions from a Jenkins shared library, and
            all pipelines contain some global variables before the pipeline block (variables with some Groovy logic that are used in more than two stages, or that should be defined as global).
            An example pipeline can be taken from the description of this issue: JENKINS-64846

            Pipelines are separated from functions, so there are no pipeline blocks inside shared-library call functions, as was shown here: JENKINS-64846?focusedCommentId=407258

            bitwiseman, I know this is beta, but is there any documentation available describing the flags and the behavior of pipelines? It would be good to have examples without diving into the plugin source code, especially with our approach of using Groovy outside the pipeline block.


            tkleiber (Torsten Kleiber) added a comment

            As I want to test a specific library branch, I try to use the following notation:

            @Library('shared-libraries@feature/test-shared-library') _
            
            pipeline {
              // long pipeline here
            }

            Therefore I tried to use the following properties combined:

            -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION=true -Dorg.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES=true 

            But as soon as I add the second parameter, the first will not work anymore. Is this the intended behavior? So I cannot use local libraries in big pipelines? Or do I have to do this in another way?

             

            Jenkins 2.387.1 on SLES 12.5.
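
            (For experimentation, a sketch of toggling the same two flags at runtime from the script console instead of via JVM options; this assumes pipeline-model-definition-plugin 1.8.x exposes both as public static fields, and the values reset on the next controller restart:)

            // Script console sketch: set both experimental script-splitting flags at runtime.
            import org.jenkinsci.plugins.pipeline.modeldefinition.parser.RuntimeASTTransformer
            RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION = true
            RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES = true
            println "splitting=${RuntimeASTTransformer.SCRIPT_SPLITTING_TRANSFORMATION}, allowLocalVariables=${RuntimeASTTransformer.SCRIPT_SPLITTING_ALLOW_LOCAL_VARIABLES}"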

             


            People

              Assignee: Unassigned
              Reporter: Anudeep Lalam (anudeeplalam)
              Votes: 80
              Watchers: 95
