• Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Component: workflow-cps-plugin

      I'm experiencing some odd behaviour with the parallel step related to variable scoping. The following minimal pipeline script demonstrates my problem.

      def fn = { val -> println val }
      
      parallel([
        a: { fn('a') },
        b: { fn('b') }
      ])
      

      Expected output

      a
      b
      

      (or b then a; the order of execution is undefined)

      Actual output

      b
      b
      
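      For contrast, the same logic in plain Groovy (run with the groovy CLI, outside the Jenkins CPS interpreter) behaves as expected: each invocation of the shared closure receives its own argument. A minimal sketch, sequential rather than parallel since `parallel` is a pipeline step:

```groovy
// Plain-Groovy equivalent of the pipeline above, with the pipeline's
// parallel step replaced by sequential calls. Standard Groovy closure
// semantics pass each argument independently, so this prints 'a' then
// 'b' - the duplicated output in the report is specific to the
// CPS-transformed pipeline interpreter.
def fn = { val -> println val }

['a', 'b'].each { branch -> fn(branch) }
```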

        Attachments:
          1. console_output_no_node_step.png (56 kB)
          2. console_output.png (137 kB)
          3. pipeline_step.png (222 kB)
          4. pipeline_steps.png (137 kB)

          [JENKINS-38268] Parallel step and closure scope

          Mark Lagendijk added a comment -

          Awesome work!
          Now that both this bug and the binary Groovy methods bug are fixed, the experience for pipeline developers will be much better!
          It will spare everyone a lot of "Argh! Why isn't this working? What am I doing wrong?" moments (or hours).

          Phil McArdle added a comment -

          Apologies if there's a separate bug for this, but I thought this one would address it too. If I adapt the sample pipeline in the description like so:

          def fn = { val -> println val }
          
          stages = ['a','b']
          
          def builders = [:]
          for (stage in stages) {
              builders[stage] = { fn(stage) }
          }
          
          parallel builders
          

          Based on https://jenkins.io/doc/pipeline/examples/#parallel-multiple-nodes

          I will see the following output:

          [Pipeline] [a] { (Branch: a)
          [Pipeline] [b] { (Branch: b)
          [Pipeline] [a] echo
          [a] b
          [Pipeline] [a] }
          [Pipeline] [b] echo
          [b] b
          [Pipeline] [b] }
          

          Is there a separate bug for this then?

          At present, I use the workaround of re-assigning anything in the stage object to a local variable before using them, which looked a lot like what was happening here.

          If I reintroduce the workaround from the documentation linked, I see the correct output of course.


          Phil Grayson added a comment -

          philmcardlecg

          What you described is not related to this bug. See Jesse's comment from 2017-02-22 22:07.

          See also http://www.teknically-speaking.com/2013/01/closures-in-loops-javascript-gotchas.html
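          The same gotcha is reproducible in plain Groovy, independent of Jenkins: a `for (x in ...)` loop has a single loop variable, so every closure created inside the loop captures that one binding and later sees its final value, whereas an `.each` parameter is a fresh binding on every iteration. A minimal sketch (plain Groovy, illustrative names):

```groovy
// Closures built in a for-in loop all capture the SAME loop variable,
// so by the time they are called it holds the last value.
def fromLoop = []
for (stage in ['a', 'b']) {
    fromLoop << { stage }
}
assert fromLoop*.call() == ['b', 'b']

// An .each parameter is a fresh binding on each iteration (the same
// effect as copying into a per-iteration local), so each closure
// captures the value current at its creation.
def fromEach = []
['a', 'b'].each { stage ->
    fromEach << { stage }
}
assert fromEach*.call() == ['a', 'b']
```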


          Phil McArdle added a comment -

          Ah, I hadn't seen that comment. Didn't understand this was a general Groovy issue. Thanks muchly


          Phil McArdle added a comment -

          On the off-chance anyone else has my use case, philgrayson's comment above helped me remember why I wasn't using an iterator, and then I realised that JENKINS-26481 had been fixed in the preceding version of the same plugin, so I'm able to use .each in my example code again and get the desired results (and also in my actual pipeline code).


          Jesse Glick added a comment -

          Right, whereas previously you needed to do

          def builders = [:]
          for (stage in ['a','b']) {
              def _stage = stage
              builders[stage] = {echo _stage}
          }
          parallel builders
          

          now you can do

          def builders = [:]
          ['a','b'].each {stage -> builders[stage] = {echo stage}}
          parallel builders
          

          Actually you can simplify a bit more

          parallel(['a','b'].collectEntries {stage -> [stage, {echo stage}]})
          

          though this currently throws up a wall of signature approval requests, some of which are actually internal calls you should not be dealing with; abayer is working on a fix for that.


          Kevin Phillips added a comment - edited

          Hmmm - I'm not sure if my case is exactly the same as this one, but it appears to me that this bug - or one similar to it - still exists. I'm running Jenkins v2.148 and Pipeline plugin v2.6 and can still reproduce this parallel variable scoping problem, without using closures. Here's an example:

          parallel (
              "thread1": {
                  node() {
                      def a = git(branch: 'BranchA', credentialsId: 'MyCreds', url: 'url/to/git/repo.git')
                      def b = "1"
                      def c = sh(returnStdout: true, script: 'echo 1').trim() 
                      
                      echo "git commit " + a.GIT_COMMIT
                      echo "b is " + b
                      echo "c is " + c
                  }
              },    
              "thread2": {
                  node() {
                      def a = git(branch: 'BranchB', credentialsId: 'MyCreds', url: 'url/to/git/repo.git')
                      def b = "2"
                      def c = sh(returnStdout: true, script: 'echo 2').trim()
                      
                      echo "git commit " + a.GIT_COMMIT
                      echo "b is " + b
                      echo "c is " + c
                  }
              }
          )
          

          In this example, I would expect the output to be something like

          [thread1] git commit <BranchA_Hash>
          [thread1] b is 1
          [thread1] c is 1
          [thread2] git commit <BranchB_Hash>
          [thread2] b is 2
          [thread2] c is 2

          What I actually get is this:

          [thread1] git commit <BranchA_Hash>
          [thread1] b is 1
          [thread1] c is 1
          [thread2] git commit <BranchA_Hash>
          [thread2] b is 2
          [thread2] c is 2

          While the 'b' and 'c' variables seem to get unique values for each of the parallel threads, the 'a' value becomes ambiguous, always returning a single Git commit hash.

          Further, I've confirmed that the Git commit hash shown is the one from whichever branch executes "first" in the parallel stages. So, for example, if you put a small "sleep 5" at the top of thread1, then you get the Git commit hash from BranchB in both echo statements, and if you move the sleep statement to the top of thread2 you get the hash from BranchA.

          Also interesting is that although the 'a' variable gets confused based on thread execution order, the other variables do not. I thought at first that assigning a static value to the variable, as in the case of 'b', might explain the discrepancy, so I added a dynamically executed shell step to the mix for variable 'c' - and I get the same results as 'b'. So I'm not sure how or why this is happening.


          Jacob Keller added a comment -

          So you're using the git step and expecting its output to be unique per branch? Is it possible that after it runs once it always returns the first branch? I suspect that step isn't actually returning the value you think it does.

          Hmm, but you said if you rename the variable it works fine? That's odd.


          Kevin Phillips added a comment -

          So I guess the problem as described in my example above is particular to the `git` and `checkout` build steps; however, I have managed to reproduce the problem using the `sh` step as well - just in a more elaborate way. I haven't reduced that one to a minimal reproducible example yet, so I left that part out of this reply.

          For now the only easy-to-reproduce example I have is with those 2 specific build steps. Also, the "trick" of renaming the variable names doesn't seem to work with either of those two build steps as I had originally thought. It does, however, seem to work around the problem wrt the `sh` build step that I have also encountered in some of my production builds. It's not a "fix" of course but it has temporarily worked around the problem for me in at least one use case. I have others I need to investigate further to see if / how they relate. Will post back as soon as I know more.

          Regardless, the example script I provided above should reproduce the problem in that one context at least.


          Jesse Glick added a comment -

          leedega whatever you are seeing is an unrelated issue. Please do not discuss it here. Rather, file a fresh issue with steps to reproduce from scratch in workflow-scm-step-plugin.


            Assignee: Andrew Bayer (abayer)
            Reporter: Phil Grayson (philgrayson)