  Jenkins / JENKINS-38268

Parallel step and closure scope


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Component/s: workflow-cps-plugin
    • Labels:
    • Environment:

      Description

      I'm experiencing some odd behaviour with the parallel step related to variable scoping. The following minimal pipeline script demonstrates my problem.

      def fn = { val -> println val }
      
      parallel([
        a: { fn('a') },
        b: { fn('b') }
      ])
      

      Expected output

      a
      b
      

      (or b then a; the order of execution is undefined)

      Actual output

      b
      b
      

        Attachments

        1. console_output_no_node_step.png
          console_output_no_node_step.png
          56 kB
        2. console_output.png
          console_output.png
          137 kB
        3. pipeline_step.png
          pipeline_step.png
          222 kB
        4. pipeline_steps.png
          pipeline_steps.png
          137 kB

          Issue Links

            Activity

            jekeller Jacob Keller added a comment - - edited

            I have this issue as well. I have a block that takes a closure and sets up the parallel branches, running the closure once for each node, but inside the node the closure's parameter always takes only the last value.
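            For reference, a minimal sketch of that pattern (the node labels and helper name are made up for illustration); on affected versions the body reportedly received only the last label, much like the examples further down:

            def onEachNode(body) {
                def branches = [:]
                for (label in ['nodeA', 'nodeB']) {
                    def l = label                          // alias the loop variable (a general Groovy gotcha)
                    branches[l] = { node(l) { body(l) } }
                }
                return branches
            }

            parallel(onEachNode { label -> echo "running on ${label}" })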

            jglick Jesse Glick added a comment -

            Possibly a duplicate etc.

            jfemia James Femia added a comment -

            Same issue here in 2.19.1 LTS / Pipeline 2.4

            I thought cloning the closure might be a workaround, but it had no effect. Now I'm working around it with more copy and paste of the code that would otherwise have been in the closure.

            abayer Andrew Bayer added a comment -

            Yet another of the JENKINS-26481 symptoms...

            jglick Jesse Glick added a comment -

            I doubt this is a duplicate of JENKINS-26481. call in this case should refer to the implicit method defined in CPS-transformed sources, not a binary method. And it is the receiver, not an argument, which is a closure.
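            For illustration, the distinction being drawn here (hypothetical snippet):

            def fn = { val -> println val }

            // Here the closure is the receiver: fn('a') is fn.call('a') (this issue).
            fn('a')

            // Here the closure is an argument to a binary Groovy method (the JENKINS-26481 case).
            ['a', 'b'].each { val -> println val }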

            aleol57 Alexander Olofsson added a comment -

            I've been able to replicate this consistently without adding a node block; in fact, I still haven't found a way to not replicate it.

            Composed closures also seem to be even more unhappy about being parallelized.

            Script:

            def heavyweight_task = { fn->
                println "Doing heavy work on ${fn}..."
            }
            
            def work = [
                file10: heavyweight_task.curry('file10'),
                file11: heavyweight_task.curry('file11'),
                file20: ({'file20'} >> heavyweight_task),
                file21: ({'file21'} >> heavyweight_task),
                file30: { heavyweight_task.call('file30') },
                file31: { heavyweight_task.call('file31') },
                file40: heavyweight_task.clone().curry('file40'),
                file41: heavyweight_task.clone().curry('file41'),
                file50: heavyweight_task.dehydrate().curry('file50'),
                file51: heavyweight_task.dehydrate().curry('file51'),
            ]
            
            parallel(work)
            

            output:

            [Pipeline] parallel
            [Pipeline] [file10] { (Branch: file10)
            [Pipeline] [file11] { (Branch: file11)
            [Pipeline] [file20] { (Branch: file20)
            [Pipeline] [file21] { (Branch: file21)
            [Pipeline] [file30] { (Branch: file30)
            [Pipeline] [file31] { (Branch: file31)
            [Pipeline] [file40] { (Branch: file40)
            [Pipeline] [file41] { (Branch: file41)
            [Pipeline] [file50] { (Branch: file50)
            [Pipeline] [file51] { (Branch: file51)
            [Pipeline] [file10] echo
            [file10] Doing heavy work on file51...
            [Pipeline] [file10] }
            [Pipeline] [file11] echo
            [file11] Doing heavy work on file51...
            [Pipeline] [file11] }
            [Pipeline] [file20] }
            [Pipeline] [file21] }
            [Pipeline] [file40] echo
            [file40] Doing heavy work on file31...
            [Pipeline] [file40] }
            [Pipeline] [file41] echo
            [file41] Doing heavy work on file31...
            [Pipeline] [file41] }
            [Pipeline] [file50] echo
            [file50] Doing heavy work on file31...
            [Pipeline] [file50] }
            [Pipeline] [file51] echo
            [file51] Doing heavy work on file31...
            [Pipeline] [file51] }
            [Pipeline] [file30] echo
            [file30] Doing heavy work on file31...
            [Pipeline] [file30] }
            [Pipeline] [file31] echo
            [file31] Doing heavy work on file31...
            [Pipeline] [file31] }
            [Pipeline] // parallel
            [Pipeline] End of Pipeline
            
            jglick Jesse Glick added a comment -

            Well that is more likely an issue with curry; not sure if that was ever implemented.

            aleol57 Alexander Olofsson added a comment -

            To be honest, currying was just a good way to showcase the issue. Anything that embeds a value into the closure seems to be lost in parallelisation, so even wrapping the closure into another one that feeds it the value as a static parameter doesn't work. (file30 and file31 in the example)

            Ran into this when trying to create a list of parallel steps out of files from a folder, to spread out a series of fixed-time workloads onto many executors so that the build step wouldn't take as much time.
            I tried everything I could think of to work around the problem; not even rolling my own closure class to store the value worked.

            jglick Jesse Glick added a comment -

            No idea offhand, will have to reproduce in a functional test and study in a debugger.

            akovi Andras Kovi added a comment - - edited

            Seems like the names of the parameters of the called closures play some role. For example:

            def finallyHandler(param, closure) {
                echo "HANDLER:param=$param"
                closure(param, param)
            }
            
            work = [
              "1": { finallyHandler(1) { p, p1 -> echo "p=$p p1=$p1" } },
              "2": { finallyHandler(2) { p, p2 -> echo "p=$p p2=$p2" } },
              "3": { finallyHandler(3) { p, p3 -> echo "p=$p p3=$p3" } },
            ]
            
            parallel work
            

            Output: (we want to see the same value for p= and pX=)

            [Pipeline] parallel
            [Pipeline] [1] { (Branch: 1)
            [Pipeline] [2] { (Branch: 2)
            [Pipeline] [3] { (Branch: 3)
            [Pipeline] [1] echo
            "[1] HANDLER:param=1"
            [Pipeline] [2] echo
            "[2] HANDLER:param=2"
            [Pipeline] [3] echo
            "[3] HANDLER:param=3"
            [Pipeline] [1] echo
            "[1] p=3 p1=1"
            [Pipeline] [1] }
            [Pipeline] [2] echo
            "[2] p=3 p2=2"
            [Pipeline] [2] }
            [Pipeline] [3] echo
            "[3] p=3 p3=3"
            [Pipeline] [3] }
            [Pipeline] // parallel
            [Pipeline] End of Pipeline
            Finished: SUCCESS
            
            jglick Jesse Glick added a comment -

            I wonder if ClosureCallEnv is misbehaving. Need to go through this in a debugger.

            externl Joe George added a comment -

            We experience this too, it's quite annoying and hard to work around.

            jlpinardon jlpinardon added a comment -

            Same problem when trying to loop over a map of git repositories:

            def scmRepo = [
                'jobRepo': [
                    'url': ".....",
                    'branch': "origin/master",
                    'credentialsId': '.....',
                    'targetDir': 'src'
                ],
                'envRepo': [
                    'url': ".....",
                    'branch': "origin/master",
                    'credentialsId': '.....',
                    'targetDir': 'src'
                ],
                'srcRepo': [
                    'url': ".....",
                    'branch': "origin/master",
                    'credentialsId': '.....',
                    'targetDir': 'src'
                ]
            ]

            @NonCPS
            def runCheckOut(scmRepo) {
                def branches = [:]
                scmRepo.each() { repoName, repoRef ->
                    branches["${repoName}"] = {
                        node {
                            println "\nCheckout content from ${repoRef.url}"
                            // your logic there
                        }
                    }
                }
                parallel branches
            }

            When using runCheckOut, it runs but never ends. No error message is displayed.
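            A sketch of a variant that avoids two pitfalls noted elsewhere in this thread: Pipeline steps such as node and parallel may not be called from an @NonCPS method (see Jesse Glick's comment further down), and the loop value should be aliased before a closure captures it. Untested, for illustration only:

            def runCheckOut(scmRepo) {
                def branches = [:]
                for (entry in scmRepo) {
                    def repoName = entry.key      // alias values before the closure captures them
                    def repoRef = entry.value
                    branches[repoName] = {
                        node {
                            println "Checkout content from ${repoRef.url}"
                        }
                    }
                }
                parallel branches
            }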

            nvgoldin Nadav Goldin added a comment -

            Hi, I'm experiencing what I think is a symptom of the same issue while trying to trigger parallel builds. Take this pipeline job, assuming the dummy jobs 'job-1', 'job-2' and 'job-3' exist:

            @NonCPS
            def get_dummy_params(val)
            {
                return [string(name: 'dummy', value: "$val")]
            }
            
            @NonCPS
            def create_jobs()
            {
                def jobs = [:]
                (1..3).each { jobs["job-$it"] = { -> build([job: "job-$it", parameters: get_dummy_params(it) ]) } }
                return jobs
            }
            
            stage ('triggering') {
                parallel(create_jobs())
            }
            

            I would expect it to trigger the 3 jobs in parallel; instead it 'nests' all 3 jobs under the 'job-1' branch and creates empty branches for 'job-2' and 'job-3', causing the job to never end. Console output:

            Started by user admin
            [Pipeline] stage
            [Pipeline] { (triggering)
            [Pipeline] parallel
            [Pipeline] [job-1] { (Branch: job-1)
            [Pipeline] [job-1] build (Building job-1)
            [job-1] Scheduling project: job-1
            [Pipeline] [job-2] { (Branch: job-2)
            [Pipeline] [job-1] build (Building job-2)
            [job-1] Scheduling project: job-2
            [Pipeline] [job-3] { (Branch: job-3)
            [Pipeline] [job-1] build (Building job-3)
            [job-1] Scheduling project: job-3
            [job-1] Starting building: job-3 #1
            ...
            

            I found only 2 workarounds:
            1. Don't use any functions to generate the data fed into the 'parallel' function (i.e. create everything under the 'stage' step).
            2. Feed a newly created map to parallel and redefine the Groovy closures with a '.call()' (frankly I have no idea how I came up with this).
            I.e. replace the 'stage' in the previous example with:

            stage ('triggering') {
                def jobs = create_jobs()
                parallel([
                    'job-1': { jobs['job-1'].call() },
                    'job-2': { jobs['job-2'].call() },
                    'job-3': { jobs['job-3'].call() },
                    ])
            }
            

            Output as expected:

            Started by user admin
            [Pipeline] stage
            [Pipeline] { (triggering)
            [Pipeline] parallel
            [Pipeline] [job-1] { (Branch: job-1)
            [Pipeline] [job-2] { (Branch: job-2)
            [Pipeline] [job-3] { (Branch: job-3)
            [Pipeline] [job-1] build (Building job-1)
            [job-1] Scheduling project: job-1
            [Pipeline] [job-2] build (Building job-2)
            [job-2] Scheduling project: job-2
            [Pipeline] [job-3] build (Building job-3)
            [job-3] Scheduling project: job-3
            [job-1] Starting building: job-1 #2
            [job-2] Starting building: job-2 #2
            [Pipeline] [job-2] }
            [job-3] Starting building: job-3 #2
            [Pipeline] [job-1] }
            [Pipeline] [job-3] }
            [Pipeline] // parallel
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] End of Pipeline
            Finished: SUCCESS
            

            Neither workaround allows the code to be reused, though, as the list of jobs has to be generated inside the 'stage'.
            Tested on Jenkins ver. 2.32.2, using the Docker image.

            jglick Jesse Glick added a comment -

            Nadav Goldin create_jobs is wrong. You may not call steps like build from inside a method marked @NonCPS.

            nvgoldin Nadav Goldin added a comment -

            Jesse Glick, thanks! I changed the code to simple for loops without iterators, and without '@NonCPS', and it seems to work. A small gotcha I encountered is that the looping variable needs to be aliased, otherwise only the last value will be sent to all closures (unlike when using iterators, I think). For reference, here is the code that does work:

            def get_dummy_params(val)
            {
                return [string(name: 'dummy', value: "$val")]
            }
            
            def create_jobs()
            {
                def jobs = [:]
                for (int i=1; i <= 3; i++) {
                    def x = i
                    jobs["job-$x"] = { -> build([job: "job-$x", parameters: get_dummy_params(x) ]) }
                }
                return jobs
            }
            
            stage ('triggering') {
                parallel(create_jobs())
            }
            
            jglick Jesse Glick added a comment -

            the looping variable needs to be aliased, otherwise only the last variable will be sent to all closures

            Yes, this is just a general aspect of Groovy.

            $ groovy -e 'def closures = []; for (int i = 0; i < 10; i++) {closures += { -> println i}}; closures.each {it()}'
            10
            10
            10
            10
            10
            10
            10
            10
            10
            10
            
            tknerr Torben Knerr added a comment - - edited

            I believe Nadav Goldin's example is not representative of this issue, since `get_dummy_params` is a method. If it were a closure instead (which is totally not needed in this case, so it would be quite artificial) it would fail again, I suspect.
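            For illustration, the (admittedly artificial) closure variant being hypothesized would look something like this; the suspicion above is that branches built around it would again all see the last value:

            // Hypothetical: get_dummy_params as a closure rather than a method.
            def get_dummy_params = { val -> [string(name: 'dummy', value: "$val")] }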

            Here is another example which suffers from the issue, and I have totally no clue how to work around it:

            def onEachSlave(doStuff) {
              def doStuffClosures = [:]
              for (slave in ['slavelnx1', 'slavelnx2', 'slavelnx3']) {
                def s = slave
                doStuffClosures[s] = { echo "running on ${s}..."; doStuff(s); echo "...done on ${s}!" }
              }
              return doStuffClosures
            }
            
            parallel(onEachSlave { slave ->
                echo "doing stuff on ${slave}..."
            })
            

            The output is:

            [Pipeline] parallel
            [Pipeline] [slavelnx1] { (Branch: slavelnx1)
            [Pipeline] [slavelnx2] { (Branch: slavelnx2)
            [Pipeline] [slavelnx3] { (Branch: slavelnx3)
            [Pipeline] [slavelnx1] echo
            [slavelnx1] running on slavelnx1...
            [Pipeline] [slavelnx2] echo
            [slavelnx2] running on slavelnx2...
            [Pipeline] [slavelnx3] echo
            [slavelnx3] running on slavelnx3...
            [Pipeline] [slavelnx1] echo
            [slavelnx1] doing stuff on slavelnx3
            [Pipeline] [slavelnx1] echo
            [slavelnx1] ...done on slavelnx1!
            [Pipeline] [slavelnx1] }
            [Pipeline] [slavelnx2] echo
            [slavelnx2] doing stuff on slavelnx3
            [Pipeline] [slavelnx2] echo
            [slavelnx2] ...done on slavelnx2!
            [Pipeline] [slavelnx2] }
            [Pipeline] [slavelnx3] echo
            [slavelnx3] doing stuff on slavelnx3
            [Pipeline] [slavelnx3] echo
            [slavelnx3] ...done on slavelnx3!
            [Pipeline] [slavelnx3] }
            [Pipeline] // parallel
            [Pipeline] End of Pipeline
            

            This is Jenkins 2.32.1 and workflow-cps 2.23

            Jesse Glick can you confirm this is the issue, or is it just my missing groovy foo?

            Would also be happy to hear if someone can come up with a workaround for this

            jglick Jesse Glick added a comment -

            Most likely a bug in the depths of groovy-cps. Probably easy enough to reproduce in a unit test there. After that, I have no idea offhand how difficult the fix would be.

            jelion John Elion added a comment - - edited

            I've been hit by the same issue. I've noticed that certain kinds of scoping pass through OK, but some kinds do not.

                def say1(s) { println('say1: ' + s) }
                def say2 = { s -> println('say2: ' + s) }

                def setup = {
                  def map = [:]
                  def say3 = { s -> println('say3: ' + s) }
                  for (def i = 0; i < 3; i++) {
                    def x = i
                    map[x] = { println x; say1( x ); say2( x ); say3( x ) }
                  }
                  return map
                }

                def jobs = setup()
                parallel(jobs)

            The "println" and "say1" work but "say2" and "say3" are wrong, and in a different ways.  Stripping all the "noise", I get the following:

                [0] 0
                [1] 1
                [2] 2
                [0] say1: 0
                [1] say1: 1
                [2] say1: 2
                [0] say2: 2
                [1] say2: 0
                [2] say2: 1
                [0] say3: 2
                [1] say3: 2
                [2] say3: 2

            say2 is not printing the x value associated with the parallel branch.  say3 is always printing the x value from branch "2".  (I had originally commented that say2 was working, but it is not.  The branch name in brackets should always match the number at the end of each line.)

            Spotting these differences helped me work around the issue and also understand what the scope resolution issue might be...
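            One plausible workaround, consistent with the observation above that the plain method call (say1) resolves correctly, is to turn the helper closures into methods as well; a sketch, not verified:

            def say1(s) { println('say1: ' + s) }
            def say2(s) { println('say2: ' + s) }   // was a closure above
            def say3(s) { println('say3: ' + s) }   // was a closure inside setup above

            def setup() {
                def map = [:]
                for (def i = 0; i < 3; i++) {
                    def x = i
                    map[x] = { println x; say1(x); say2(x); say3(x) }
                }
                return map
            }

            parallel(setup())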

            vallon Justin Vallon added a comment -

            Here is a simpler case only using a closure, parallel, and an assert.  You can remove the echos if you want:

            def fn = {
                arg ->
                echo "arg before $arg";
                arg.count = arg.count + 1;
                echo "arg after $arg";
            };
            
            def a = [ id: 'a', count : 0 ];
            def b = [ id: 'b', count : 0 ];
            
            parallel(
                StepA : { fn(a); },
                StepB : { fn(b); },
            );
            
            // expected:
            //   a == [ id: 'a', count : 1 ]
            //   b == [ id: 'b', count : 1 ]
            
            // actual:
            //   a == [ id: 'a', count : 0 ]
            //   b == [ id: 'b', count : 2 ]
            
            echo "a $a";
            echo "b $b";
            assert a.count == 1;
            assert b.count == 1;
            
            

            Actual output:

            [Pipeline] parallel
            [Pipeline] [StepA] { (Branch: StepA)
            [Pipeline] [StepB] { (Branch: StepB)
            [Pipeline] [StepA] echo
            [StepA] arg before [id:b, count:0]
            [Pipeline] [StepA] echo
            [StepA] arg after [id:b, count:1]
            [Pipeline] [StepA] }
            [Pipeline] [StepB] echo
            [StepB] arg before [id:b, count:1]
            [Pipeline] [StepB] echo
            [StepB] arg after [id:b, count:2]
            [Pipeline] [StepB] }
            [Pipeline] // parallel
            [Pipeline] echo
            a [id:a, count:0]
            [Pipeline] echo
            b [id:b, count:2]
            [Pipeline] End of Pipeline
            
            GitHub has been notified of this commit's build result
            
            hudson.remoting.ProxyException: Assertion failed: 
            
            assert a.count == 1
            
            
            jglick Jesse Glick added a comment -

            Makes fix of JENKINS-26481 less usable in practice.

            abayer Andrew Bayer added a comment -

            Jesse Glick So it's not just a problem with parallel? Based on that, I'd guess the problem specifically is in Closure getting passed around wrong?

            jglick Jesse Glick added a comment -

            Andrew Bayer steps to reproduce in JENKINS-44746 do not use parallel so I believe it is a general bug in groovy-cps.

            abayer Andrew Bayer added a comment -

            I've got an initial PR up at https://github.com/cloudbees/groovy-cps/pull/61 that seems to do the trick (i.e., I added a test based on the example given below and over in JENKINS-44746, which showed that parallel isn't needed to hit this bug; it failed, I made changes, and it passed), but it needs Jesse Glick and Kohsuke Kawaguchi to weigh in.

            scm_issue_link SCM/JIRA link daemon added a comment -

            Code changed in jenkins
            User: Andrew Bayer
            Path:
            pom.xml
            src/test/java/org/jenkinsci/plugins/workflow/cps/steps/ParallelStepTest.java
            http://jenkins-ci.org/commit/workflow-cps-plugin/102b15ec641254602168f47aebe290669dfe8315
            Log:
            JENKINS-38268 Testing for lexical closure scope

            Downstream of https://github.com/cloudbees/groovy-cps/pull/61

            scm_issue_link SCM/JIRA link daemon added a comment -

            Code changed in jenkins
            User: Jesse Glick
            Path:
            pom.xml
            src/test/java/org/jenkinsci/plugins/workflow/cps/steps/ParallelStepTest.java
            http://jenkins-ci.org/commit/workflow-cps-plugin/874ae478ffd7f64d484240c18e8a1136567872a9
            Log:
            Merge pull request #143 from abayer/jenkins-38268

            JENKINS-38268 Testing for lexical closure scope

            Compare: https://github.com/jenkinsci/workflow-cps-plugin/compare/d897ac0e4605...874ae478ffd7

            jglick Jesse Glick added a comment -

            Released as 2.35.

            philgrayson Phil Grayson added a comment -

            Thank you very much Jesse, Andrew, Kohsuke and anyone else

            externl Joe George added a comment -

            Yea, thanks! I'm pretty excited about this fix. I can finally remove all the workarounds! 

            reinholdfuereder Reinhold Füreder added a comment -

            Thanks for this fix!

            This also allowed me to get rid of a workaround in non-parallel usage of closures, where I needed to store the closure args in a closure-local variable...
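            For illustration, the kind of pre-fix workaround being described (details assumed): the closure argument is copied into a closure-local variable right away, and only the copy is used afterwards.

            def fn = { val ->
                def localVal = val         // copy the argument into a closure-local variable first
                echo "value: ${localVal}"  // then only use the local copy
            }

            fn('a')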

            markl_lagendijk Mark Lagendijk added a comment -

            Awesome work!
            Now that both this bug and the binary Groovy methods bug are fixed, the experience for pipeline developers will be much better!
            It will spare everyone a lot of 'Argh! Why isn't this working? What am I doing wrong?' moments/hours.

            philmcardlecg Phil McArdle added a comment -

            Apologies if there's a separate bug for this, but I thought this one would address it too. If I adapt the sample pipeline in the description like so:

            def fn = { val -> println val }
            
            stages = ['a','b']
            
            def builders = [:]
            for (stage in stages) {
                builders[stage] = { fn(stage) }
            }
            
            parallel builders
            

            Based on https://jenkins.io/doc/pipeline/examples/#parallel-multiple-nodes

            I will see the following output:

            [Pipeline] [a] { (Branch: a)
            [Pipeline] [b] { (Branch: b)
            [Pipeline] [a] echo
            [a] b
            [Pipeline] [a] }
            [Pipeline] [b] echo
            [b] b
            [Pipeline] [b] }
            

            Is there a separate bug for this then?

            At present, I use the workaround of re-assigning anything in the stage object to a local variable before using it, which looked a lot like what was happening here.

            If I reintroduce the workaround from the documentation linked, I see the correct output of course.
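            That documented workaround amounts to aliasing the loop variable before the closure captures it, roughly:

            def fn = { val -> println val }

            def builders = [:]
            for (stage in ['a', 'b']) {
                def s = stage              // alias the loop variable
                builders[s] = { fn(s) }
            }

            parallel builders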

            philgrayson Phil Grayson added a comment -

            Phil McArdle

            What you described is not related to this bug. See Jesse's comment from 2017-02-22 22:07.

            See also http://www.teknically-speaking.com/2013/01/closures-in-loops-javascript-gotchas.html

            philmcardlecg Phil McArdle added a comment -

            Ah, I hadn't seen that comment. Didn't understand this was a general Groovy issue. Thanks muchly

            philmcardlecg Phil McArdle added a comment -

            On the off-chance anyone else has my use case, Phil Grayson's comment above helped me remember why I wasn't using an iterator, and then I realised that JENKINS-26481 had been fixed in the preceding version of the same plugin, so I'm able to use .each in my example code again and get the desired results (and also in my actual pipeline code).

            jglick Jesse Glick added a comment -

            Right, whereas previously you needed to do

            def builders = [:]
            for (stage in ['a','b']) {
                def _stage = stage
                builders[stage] = {echo _stage}
            }
            parallel builders
            

            now you can do

            def builders = [:]
            ['a','b'].each {stage -> builders[stage] = {echo stage}}
            parallel builders
            

            Actually you can simplify a bit more

            parallel(['a','b'].collectEntries {stage -> [stage, {echo stage}]})
            

            though this currently throws up a wall of signature approval requests, some of which are actually internal calls you should not be dealing with; Andrew Bayer is working on a fix for that.

            leedega Kevin Phillips added a comment - - edited

            Hmmm - I'm not sure if my case is exactly the same as this one, but it appears to me that this bug - or one similar to it - still exists. I'm running Jenkins v2.148 and Pipeline plugin v2.6 and can still reproduce this parallel variable problem, without using closures. Here's an example:

            parallel (
                "thread1": {
                    node() {
                        def a = git(branch: 'BranchA', credentialsId: 'MyCreds', url: 'url/to/git/repo.git')
                        def b = "1"
                        def c = sh(returnStdout: true, script: 'echo 1').trim() 
                        
                        echo "git commit " + a.GIT_COMMIT
                        echo "b is " + b
                        echo "c is " + c
                    }
                },    
                "thread2": {
                    node() {
                        def a = git(branch: 'BranchB', credentialsId: 'MyCreds', url: 'url/to/git/repo.git')
                        def b = "2"
                        def c = sh(returnStdout: true, script: 'echo 2').trim()
                        
                        echo "git commit " + a.GIT_COMMIT
                        echo "b is " + b
                        echo "c is " + c
                    }
                }
            )
            

            In this example, I would expect the output to be something like

            [thread1] git commit <BranchA_Hash>
            [thread1] b is 1
            [thread1] c is 1
            [thread2] git commit <BranchB_Hash>
            [thread2] b is 2
            [thread2] c is 2

            What I actually get is this:

            [thread1] git commit <BranchA_Hash>
            [thread1] b is 1
            [thread1] c is 1
            [thread2] git commit <BranchA_Hash>
            [thread2] b is 2
            [thread2] c is 2

            While the 'b' and 'c' variables seem to get unique values for each of the parallel threads, the 'a' value becomes ambiguous, always returning a single Git commit hash.

            Further, I've confirmed that the Git commit hash that is shown represents the one that is executed "first" in the parallel stages. So, for example, if you put a small "sleep 5" at the top of thread1, then you get the Git commit hash from BranchB in both echo statements, and if you move the sleep statement to the top of thread2 you get the hash from BranchA.

            Also interesting is that although the 'a' variable seems to get confused based on the thread execution, the other variables do not. I thought at first that assigning a static value to the variable might have explained the discrepancy, as in the case of the 'b' variable, so I added a dynamically executed shell step to the mix for variable 'c', and I get the same results as 'b'... so I'm not sure how / why / what is going on here.

            jekeller Jacob Keller added a comment -

            So you're using the git step and expect its output to return something unique? Is it possible that after it runs once it always returns the first branch? I suspect that step isn't actually returning the value you think it does.

            Hmm, but you said if you rename the variable it works fine? That's odd.

            leedega Kevin Phillips added a comment -

            So I guess the problem as described in my example above seems to be particular to the `git` and `checkout` build steps; however, I have managed to reproduce the problem using the `sh` step as well - just in a more elaborate way. I haven't reduced that one to a minimal reproducible example yet, so I left that part out of my reply.

            For now the only easy-to-reproduce example I have is with those 2 specific build steps. Also, the "trick" of renaming the variable names doesn't seem to work with either of those two build steps as I had originally thought. It does, however, seem to work around the problem wrt the `sh` build step that I have also encountered in some of my production builds. It's not a "fix" of course but it has temporarily worked around the problem for me in at least one use case. I have others I need to investigate further to see if / how they relate. Will post back as soon as I know more.

            Regardless, the example script I provided above should reproduce the problem in that one context at least.

            jglick Jesse Glick added a comment -

            Kevin Phillips whatever you are seeing is an unrelated issue. Please do not discuss it here. Rather, file a fresh issue with steps to reproduce from scratch in workflow-scm-step-plugin.


              People

              Assignee:
              abayer Andrew Bayer
              Reporter:
              philgrayson Phil Grayson
              Votes:
              16 Vote for this issue
              Watchers:
              28 Start watching this issue

                Dates

                Created:
                Updated:
                Resolved: