
Allow sh to return exit status, stdout and stderr all at once

      Like many of the commenters on JENKINS-26133, I'd like to be able to capture the exit status and the text written to standard out at the same time.

      My current use case is calling git merge --no-edit $branches and, if there was an error, sending a Slack notification with the output.

      The current workaround is:

      def status = sh(returnStatus: true, script: "git merge --no-edit $branches > merge_output.txt")
      if (status != 0) {
        currentBuild.result = 'FAILED'
        def output = readFile('merge_output.txt').trim()
        slackSend channel: SLACK_CHANNEL, message: "<${env.JOB_URL}|${env.JOB_NAME}> ran into an error merging the PR branches into the ${TARGET_BRANCH} branch:\n```\n${output}\n```\n<${env.BUILD_URL}/console|See the full output>", color: 'warning', tokenCredentialId: 'slack-token'
        error 'Merge conflict'
      }
      sh 'rm merge_output.txt'

      Which works, but isn't a great developer experience... It would be great if I could request an object that contained status, stdout, and stderr.
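A sketch of the kind of API being requested (hypothetical: the returnAll flag and the result object do not exist in the sh step today):

```groovy
// Hypothetical sketch only: `returnAll` and the result object are invented here.
def result = sh(script: "git merge --no-edit $branches", returnAll: true)
if (result.status != 0) {
    currentBuild.result = 'FAILED'
    slackSend channel: SLACK_CHANNEL,
              message: "Merge failed (exit ${result.status}):\n${result.stdout.trim()}",
              color: 'warning', tokenCredentialId: 'slack-token'
    error 'Merge conflict'
}
```

No temp file, no readFile, no cleanup step.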


          andrew morton added a comment - - edited

          jglick 

          The Pipeline script is glue code for automating Jenkins operations.

          I've been thinking more about this point and agree that it's exactly the right way to look at it, but I reach exactly the opposite conclusion.

          I've got a step that runs a shell command, and then, based on its exit code, I want to take the output and pass it to another step. That seems like the definition of glue code. So if I try moving this logic into "Ruby, or whatever", I'd still be limited to a single return value from the sh call (either status or stdout), but now I'm no longer able to use slackSend or the other commands exposed by plugins. What I'm asking for is the ability to easily glue these two steps together without a bunch of extra code to pipe the output to a file, read the file, and then delete the file.


          Mor L added a comment -

          This is not uncommon.

          jglick Using your own example at : 

          https://github.com/jenkinsci/pipeline-examples/blob/master/pipeline-examples/push-git-repo/pushGitRepo.Groovy

          Say the push fails - now I would like to know the reason. Is it behind and I need to pull? Is it permissions? Etc.

          Even a simple class which has only getStdout, getStderr, getExitcode and an overloaded toString() which returns getExitCode() could do the trick (I bet it's more complicated than that and there are some more methods in need of overloading, but the idea is the same).
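A minimal sketch of such a class in plain Groovy (all names are hypothetical; this is not an existing Jenkins type):

```groovy
// Hypothetical result type as described above; not part of any plugin.
class ShellResult {
    String stdout
    String stderr
    int exitCode

    // Overloading toString() to return the exit code keeps simple
    // string interpolation like "status was ${result}" working.
    @Override
    String toString() { Integer.toString(exitCode) }
}
```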


          Berend Dekens added a comment -

          To chime in: we have a script which consists of 10+ lines (so not pipes), most of which are dynamic, so I want to get the console output as well as the return code. Using tee or standard redirection is not the same, as we would have to add it to all commands, and if something changes the return code (as tee would, per the comments above), it prevents us from detecting what went wrong.

          So +1 to being able to get the console output and the console return code.


          Daniel Beck added a comment -

          cyberwizzard

          Using tee or standard redirection is not doing the same as we would have to add it to all commands

          exec in bash without a command argument will apply its output redirection to all further commands in the script.

          and if something changes the return code (like tee would from the comments above) - it prevents us from detecting what went wrong.

          set -o pipefail
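A small bash sketch of how those two tips combine (file names are arbitrary): with pipefail set, a failing command's exit code survives the pipe through tee instead of being replaced by tee's own exit status:

```shell
#!/bin/bash
# With pipefail, the pipeline's status is the failing command's (3),
# not tee's (0). The subshell stands in for any real build command.
set -o pipefail
{ echo "building"; exit 3; } | tee build.log
echo "$?" > status.txt   # records 3
```

Without `set -o pipefail`, status.txt would record tee's 0 and the failure would be masked.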


          Berend Dekens added a comment - - edited

          exec in bash without command argument will apply its output redirection to all further commands in the script.

          I did not know exec could be used like that. So the suggestion is to return the exit code, redirect the bash script's output to both file and console, and afterwards load the generated log file, using something like:

          exec &> >(tee -a "$log_file")
          echo This will be logged to the file and to the screen


          Mor L added a comment -

          What if I use Windows in the mix? Then I would have to use Cygwin and the like?

          And say I use multiple shells (bash, tcsh, etc.) - then I'd have to do a workaround per environment?

          Long story short - this should be supported natively by the pipeline DSL.


          Anish Dangi added a comment -

          I just voted for this feature and here is one use case: Run a shell command and retry the step based on the stdout content. I still want the output to be printed on the console, and I need the log content to analyze it subsequently.
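That retry-on-output use case can be approximated today with existing steps only; a sketch (my_command and the TRY_AGAIN marker are placeholders), where tee keeps the output live on the console while also writing it to a file the Pipeline reads back:

```groovy
// Sketch using existing steps (sh, readFile, retry, error) only.
// `my_command` and the TRY_AGAIN marker are placeholders.
retry(3) {
    def status = sh(returnStatus: true, script: '''#!/bin/bash
        set -o pipefail
        my_command 2>&1 | tee output.log
    ''')
    def output = readFile('output.log').trim()
    if (status != 0 && output.contains('TRY_AGAIN')) {
        error "transient failure (exit ${status}), retrying"   // triggers the retry step
    }
}
```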


          Constantin Bugneac added a comment -

          +1 for having this functionality.

          Marcel 'childNo͡.de' Trautwein added a comment - - edited

          To throw some swearing in: it's a pity that the introduced returnStatus completely overrides the returnStdout functionality.
          This is just - sorry - crap, and https://github.com/jenkinsci/workflow-durable-task-step-plugin/pull/11 should have never passed any review! ;/
          Fixing this will produce more problems in upcoming updates, gives you real "fun at work" ... thank 'you' ;/
          To say it out loud: I'm so p***** off at Jenkins pipelines, I can't find any palliative words for it ...

          Going back to my desk (assuming I'm willing to contribute):
          How do we want to rescue this? Actually I see two (hotfix) options (without revamping the complete step):
          if both returnStatus and returnStdout are set (to true), …

          1. … returnStdout might override returnStatus in case of success
            • this will break any pipeline that relies on returnStatus and is not actually "interested" in stdout as the return type
              • the "migration" step would be "easy": remove , returnStdout: true
            • this enables the case where you get stdout in normal operation and, in case of an error, the return value - but you are then responsible for logging stdout and stderr yourself
            • this opens more problems, due to different return types in different cases (non-deterministic return type: String xor Integer)
          2. … the return value becomes a result object with getters for both pieces of information, plus toString()
            • this will break any pipeline that is actually interested in returnStatus (even on success)
              • the "migration" step would be "easy": remove , returnStdout: true
            • this allows stdout to be used directly in the success case; pipeline code gets a bit more complex only if you need the status code

          both: ugly

          Another sh step option that comes to mind:

          • adding something like ignoreErrors so that any time you pass returnStdout you are not forced to wrap it in try {} catch(ignore) {}


          Steven Sauer added a comment - - edited

          +1

          @childnode Pardon my ignorance - I'm no expert on the workings of Groovy - but regarding your warning about differing return types for hotfix #1: could you have the sh instruction return a class that has multiple implicit conversion types? I'm still not clear on it after a bit of reading, but if it is possible, that would be a backwards-compatible way to implement the new functionality. It may be a bit hacky, though.

          If both flags are set and the result is assigned to a type that isn't the new class:
          Integer = statusCode
          String = stdOutput


          Marcel 'childNo͡.de' Trautwein added a comment - - edited

          stevensauer: I'm not a Groovy guru either; at the time of my rant I hadn't googled it yet, and I never came up with an idea that dirty (really, better ideas are appreciated / welcome). But FYI: as it seems to me, Groovy "magic" is not limited at this point.
          entry: http://groovy-lang.org/dsls.html#_operator_overloading
          referring to: http://groovy-lang.org/semantics.html#_custom_type_coercion plus the default implementations as stated in https://stackoverflow.com/a/1276526/529977
          (where the default intUnbox is "only" applicable to Number or Charset boxes; see the callee GroovyDefaultMethods.asType(Object, Class))

          Aside: there is also an asBoolean, but we're going too deep.

          A negative effect of a coercion implementation via asType might be that "auto-unboxing" will affect pipelines where the assigned type declaration is implicit.

          To go ahead: what are the use cases? I see:

          step call                                       | expected return     | shell exits with error code > 0
          sh "…"                                          | none                | pipeline fails
          sh returnStatus: true, "…"                      | (Integer) exit code | pipeline continues
          sh returnStdOut: true, "…"                      | (String) stdOut     | pipeline fails
          sh returnStatus: true, returnStdOut: true, "…"  | tbd                 | tbd (pipeline continues)


          Constantin Bugneac added a comment - +1 childnode

          Steven Sauer added a comment -

          childnode

          You are correct. I'm usually very explicit and forgot about implicit type declarations. Oops. Also, those do seem to be the current use cases.

          Hmm. From what I can see (though I could definitely be missing something), a consistent return type - a class that can be expanded on in the future with little impact - is the lesser of the potential evils. It also seems to have a minimal initial migration impact once implemented: only calls that used both flag params to fill an implicitly declared variable would need to migrate to an explicit declaration of the variable. Thoughts?


          Mike Ellery added a comment -

          I would like to point out that this would be particularly useful for powershell scripts since the proposed workaround of "just redirect your stuff to a file" just doesn't work in powershell. I've tried umpteen different ways to get powershell to just log all my output and it won't have any of it. Start/Stop Transcript, classic redirection, none of them work consistently or reliably in the jenkins pipeline. Just having the ability to capture the output AND the return code to know if we failed would be outstanding. As it stands right now, I either get my output or I get an exception and I have to do some pipeline trickery to handle both cases. Ugh.


          Marcel 'childNo͡.de' Trautwein added a comment -

          mellery451: just a question: isn't the powershell step (introduced by JENKINS-34581) more suitable for your needs? https://github.com/jenkinsci/workflow-durable-task-step-plugin/blob/master/src/main/java/org/jenkinsci/plugins/workflow/steps/durable_task/PowershellScriptStep.java

          If yes, I see a different issue: that there is no functional parity between sh and powershell.


          Mike Ellery added a comment -

          Marcel: I'm using `powershell` in a pipeline, as described here: https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#code-powershell-code-powershell-script. I think it mostly has parity with the `sh` step. My comment above was mainly in response to the workaround proposed in 26133 (to just redirect output from your shell to file). That's easy to do in bash and not so easy to do in powershell, at least not the way it's currently wrapped in jenkins - that has been my experience so far. Thanks.


          Mark Pettigrew added a comment -

          +1

          Trying to think of a competing CI tool or programming language that doesn't have this obvious feature. returnExit OR stdout thoroughly violates the principle of least surprise.


          leemeador added a comment -

          One solution is to have sh() return the output (in the case of returnStdout: true) as usual and, in the case of failure, throw an exception that is a subclass of the current failure exception. You could extract the error return code from it if you catch it, and the console output would be present in the exception as well, so you could extract that too.


          Fernando Nasser added a comment - - edited

          Simple solution: create a returnStderr: true option.

          As an empty String (i.e., nothing on stderr) evaluates as false, it would indicate "no errors".

          If it is not empty, one not only knows it failed but also knows WHY it failed.

          String status = sh(returnStderr: true, script: "git merge --no-edit $branches > merge_output.txt")
          if (status) {
            currentBuild.result = 'FAILED'
            def output = readFile('merge_output.txt').trim()
            slackSend channel: SLACK_CHANNEL, message: "<${env.JOB_URL}|${env.JOB_NAME}> ran into an error merging the PR branches into the ${TARGET_BRANCH} branch:\n```\n${output}\n```\n<${env.BUILD_URL}/console|See the full output>", color: 'warning', tokenCredentialId: 'slack-token'
            error status
          }
          sh 'rm merge_output.txt'


          Fernando Nasser added a comment -

          Another option:

          def status = sh(returnStatus: true, returnStdout: true, returnStderr: true, script: "git merge --no-edit $branches")

          would return a Map, which can be seen as:

          { 'rc': <return code value>,
            'stdout': <string with stdout content>,
            'stderr': <string with stderr content>
          }

          A compromise would be to have stderr and stdout together (easier to capture?):

          { 'rc': <return code value>,
            'output': <string with stdout+stderr contents>
          }

          P.S.: The 'returnStderr' option does not exist yet; I'll file a JIRA for it, as it is useful independently.


          Edgars Batna added a comment - - edited

          This should also be implemented for Windows batch 'bat' step. Inability to return both status and stdout is a huge drawback for us. Separate stdout and stderr would be a plus.

          Once we want both, it could just return a map instead of a single value. This seems trivial to implement and won't break anything, as returning both doesn't work today anyway.


          Sverre Moe added a comment - - edited

          It would be better to use a specific POJO than a Map.

          def returnObj = sh(returnStatus: true, returnStdout: true, returnStderr: true, script: "git merge --no-edit $branches")
          def status = returnObj.getStatus()
          def stdout = returnObj.getStdout()
          def stderr = returnObj.getStderr()
          

          Using a Map you would need to know the keys therein, but with a POJO you can use the javadoc to find the getters.
          Perhaps the parameter flags returnStatus, returnStdout, returnStderr would not even be necessary. It could always return the POJO, and you could either use it or not.


          Fernando Nasser added a comment -

          I'd rather keep the switches, to avoid unnecessary overhead in the cases where these are not needed.


          Benjamin Heilbrunn added a comment -

          We have already encountered multiple use cases where we needed to parse stdout when a certain exit code occurred. It would be great to have this feature.


          Ramkumar Bangaru added a comment -

          In my case, I need it because my pipeline scripts need to branch out to different paths based on the status and, if the status is successful, based on the output. It makes a lot of sense to have both status and output returned by this function.


          juan perez added a comment -

          We are using lots of Python code called from the shared library; it is a pain that sh only returns an error code. We would need a way to bubble up the error message. Those solutions sound good.


          JD Friedrikson added a comment - - edited

          Just sharing my workaround for others:

          pipeline {
            agent {
              docker {
                image 'debian'
              }
            }
          
            stages {
              stage('script') {
                steps {
                  script {
                    status = sh(
                      returnStatus: true,
          
                      script: '''#!/bin/bash
                        exec > >(tee output.log) 2>&1
                        echo 'one: stdout'
                        >&2 echo 'one: stderr'
                        exit 1
                      '''
                    )
          
                    output = readFile('output.log').trim()
                    echo output
          
                    if (status != 0) {
                      currentBuild.result = 'UNSTABLE'
                    }
                  }
                }
              }
            }
          
            post {
              cleanup {
                deleteDir()
              }
            }
          }
          
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (script)
          [Pipeline] script
          [Pipeline] {
          [Pipeline] sh
          one: stdout
          one: stderr
          [Pipeline] readFile
          [Pipeline] echo
          one: stdout
          one: stderr
          [Pipeline] }
          [Pipeline] // script
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Declarative: Post Actions)
          [Pipeline] deleteDir
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] End of Pipeline
          Finished: UNSTABLE
          


          Giovanni Tirloni added a comment -

          By not making the Jenkins code "more complex" to support a common use case, Jenkins users everywhere need to make their code more complex. I'm certain the sum of "complexity" is much greater now than it would be if `sh` implemented this request. Talk about externalities.


          Nils El-Himoud added a comment - - edited
          int exitStatus = sh(script: "curl -s -m ${timeoutSeconds} -w %{http_code} ${url}", returnStatus: true)
          int httpStatus = sh(script: "curl -s -m ${timeoutSeconds} -w %{http_code} ${url}", returnStdout: true)
          
          

          Curl: exitStatus, httpStatus. Choose one!


          Aaron D. Marasco added a comment -

          elch where are you going with that? We don't want to run every command twice.
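For what it's worth, the shell itself can hand back both values from a single invocation; here is a minimal sketch of the idiom, using `date` as a stand-in for the real command (the curl URL and timeout above are site-specific):

```shell
# Run the command once: command substitution captures stdout, and $?
# (read immediately afterwards) captures the exit status of that same run.
# For the curl case above, '-w %{http_code} -o /dev/null' would put the
# HTTP status on stdout while $? still reports transport errors.
output=$(date -u +%Y)
status=$?
echo "status=$status"
echo "output=$output"
```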

          Mahmoud Al-Ashi added a comment -

          I believe the "sh" command should always return an object with all relevant output (stdout, stderr, exit_code), and at the same time it should also print the output live to the console unless some argument (no_stdout or so) is given. This way you can access all of it at once.

          The other problem that I see now is that when "returnStdout" is provided, the output is not printed to the console. I would prefer to print the output to the console live anyway, but also get the output back to parse it or do something special with it.
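Until the step supports this natively, the "print live and still capture" behavior described above can be approximated inside the script itself with tee; a sketch of the shell idiom only (file names and the commands in the braces are illustrative):

```shell
# Merge stderr into stdout, stream everything to the console live via tee,
# and keep a copy in a temp file for later parsing. pipefail (where the
# shell supports it) makes the exit status reflect the command, not tee.
set -o pipefail 2>/dev/null || true
log=$(mktemp)
{ echo "doing work"; echo "a warning" >&2; } 2>&1 | tee "$log"
status=$?
captured=$(cat "$log")
rm -f "$log"
echo "status=$status"
```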

          Norbert Lange added a comment -

          I would really like this to be supported, as the use of the warnings plugin is otherwise really cumbersome and invasive, see JENKINS-54832 for the issue and workaround.

          The use case of

          • using the returncode to fail a build
          • using the output for displaying progress
          • re-using the output later in further plugins

          should be supported by Jenkins natively, and not require workarounds in the build scripts.


          Fernando Nasser added a comment -

          Although I am one of the supporters of implementing a solution to this JIRA, I have to object to a solution like:

          "I believe command "sh" should always return an object with all relevant output (stdout, stderr, exit_code)"

          Output on stdout and stderr may be too extensive, have all sorts of characters, etc. We had a system that included the output as one of the fields of a JSON payload, and that was always causing problems, requiring retrying, etc.

          The workaround for this JIRA that we have been using for quite some time, and which works very well, is to write the output to a file in the WORKSPACE if output is requested. We do it by piping the output (stdout and/or stderr), but that would be much cleaner if done by the step itself.

          Norbert Lange added a comment -

          fnasser: I would like and prefer a dump into a file in the workspace, while also having the normal console output.

          I am not sure what you mean by "command": do you mean the pipeline step ("sh"), or the native script? The native script is cumbersome, and there might even be different ways with different shells; if you meant the pipeline step, then I am all for it.

          But another idea would be to use a context wrapper like

          output(stdout: "build.log",  stderr: "err.log") {
             sh 'cmake --build .'
          }
          


          Suraj Sharma added a comment - - edited

          Maybe this small function can help take care of exit status and stdout.

          def runCommand(script) {    
              echo "[runCommand:script] ${script}"
          
              def stdoutFile = "rc.${BUILD_NUMBER}.out"    
              script = script + " > " + stdoutFile // note: assumes a single-line command; only the last line of a multiline script would be redirected
          
              def res = [:]    
              res["exitCode"] = sh(returnStatus: true, script: script)    
              res["stdout"] = sh(returnStdout: true, script: "cat " + stdoutFile)
          
              sh(returnStatus: true, script: "rm -f " + stdoutFile)
          
              echo "[runCommand:response] ${res}"    
              return res
          }
          

          Example Usage

              def response = runCommand("date")
              echo "${response['exitCode']}"
              echo response['stdout']
          


          Fernando Nasser added a comment -

          nolange79 The "command" was in a quote, but yes, we were talking about the 'sh' step.

          In fact we use "tee" after the '|' so we are also showing the command output in the log.

          I agree that the output should also go to the log.

          akostadinov added a comment -

          surajs21, if you pass a multiline string, that function works in a surprising way (the redirection is appended only to the last line). It needs to be done at a lower level by the plugin to be reliable.
          I actually wonder if we could have a console wrapper that captures whatever output is produced inside, like the AnsiColor Plugin. It would not be able to differentiate stdout from stderr, though.


          David Riemens added a comment -

          I like the solution by surajs21 for smaller outputs. However I have a few calls (using 'bat') that run quite long. I am looking to:

          • capture the exitcode to see if the step passed/failed
          • capture (grep) a few specific lines from stdout or stderr that contain more info;
            I want to store that info for reporting detailed results at the end
          • see the progress of the output 'live', rather than a single dump at the end.
            Yeah, writing to a file with 'tee' works, but there should be better solutions ...

          So although I'd vote for having the 'object' returned as described by fnasser it would be great if I would also be able to optionally pass a method that does the magic I need. In my case grep for some pattern in stdout, and store it such that it is available after the bat/sh step.

          I've been looking at using a 'listener' as described in, e.g.,

             https://stackoverflow.com/questions/53172023/get-console-logger-or-tasklistener-from-pipeline-script-method

          but have not found a (working) way yet to apply this in a scripted pipeline. Anyone?


          Alexander Samoylov added a comment - - edited

          Using a temp file is an ugly workaround which has at least these obvious disadvantages:
          1. You never have a 100% guarantee that you don't break parallelism. Every time you introduce one more parallel axis (such as release/debug configuration, branch, OS, whatever) you must take care that the temp file gets a corresponding suffix.
          2. This method won't work in some cases if the command already contains output redirection. For example, on Linux this works: rm 123 > temp.txt 2>&1. I am not sure if we can do the same on Windows. There may be more complex cases with tricky double/single quote combinations and multiple output redirections, or a command which consists of several semicolon-separated commands. In the end we always lose generality and platform independence if we use a temp file.
          3. You must remove the temp file after the command execution, but how do you do that on the node if the pipeline code is executed on the master? You cannot use the Java method File.createTempFile() for this, because it creates the temp file on the master. It means that the code which generates the file name must run on the master, and you then pass this name to "sh" or "bat" to remove it. Distinguishing between "sh" and "bat" also loses platform independence, by the way.
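Points 1 and 3 can at least be mitigated on the Unix side by letting the node itself generate a unique file name and clean it up with a trap; a sketch of the shell idiom only (it does nothing for the bat/Windows half of the problem, and `false` stands in for the real command):

```shell
# mktemp guarantees a unique file per invocation, so parallel branches
# cannot collide; the EXIT trap removes the file even when the command fails.
tmp=$(mktemp "${TMPDIR:-/tmp}/capture.XXXXXX")
trap 'rm -f "$tmp"' EXIT
rc=0
false > "$tmp" 2>&1 || rc=$?   # 'false' stands in for the real command
out=$(cat "$tmp")
echo "rc=$rc"
```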

          My current solution looks like this:

          def getOSPathSep() {
              if (isUnix()) {
                  return '/'
              } else {
                   return '\\'
              }
          }
          
          def getTempDirOnNode() {
              if (isUnix()) {
                  return env.TMPDIR != null ? env.TMPDIR : '/tmp'
              } else {
                   return env.TEMP
              }
          }
          
          /*  May not work if "cmd" already contains output redirection or more complex shell syntax. */
          def runCmdOnNodeSavingExitCodeAndStdout(cmd) {
              def rc = 0
              def stdout = null
              def tempFileName = 'runCmdOnNodeSavingExitCodeAndStdout_' + UUID.randomUUID() + '.txt'
              def tempFilePath = getTempDirOnNode() + getOSPathSep() + tempFileName
              
              print("Using temp file: " + tempFilePath)
              
              if (isUnix()) {
                  rc = sh(script: cmd + ' > ' + tempFilePath, returnStatus: true)
              } else {
                  rc = bat(script: cmd + ' > ' + tempFilePath, returnStatus: true);
              }
              stdout = readFile(tempFilePath).trim()
          
              // Delete temporary file from the node
              if (isUnix()) {
                  sh(script: 'rm -f ' + tempFilePath, returnStatus: true)
              } else {
                  bat(script: 'del /q ' + tempFilePath, returnStatus: true);
              }
              
              return [ rc, stdout ]
          }
          

          This workaround looks super ugly, and of course I would prefer to just say response = sh(cmd); response.getStatus() etc. without the tons of extra lines.
          I vote +1 for this feature. It is such a basic thing that I am surprised there is anything left to discuss here.

          If the pipeline maintainers refuse to implement it, we will probably need to write a plugin. I tried to implement a Java function which executes arbitrary Java code on a given node, but it does not work directly from the pipeline script due to another limitation (see "Pipeline scripts fails to call Java code on slave: Failed to deserialize the Callable object"); however, from a plugin it should work. But I still hope this feature will be implemented, because it is in high demand and I don't see any reason not to have it.


          ilya s added a comment -

          Here's my use case for this:

          aws ecr describe-repositories --repository-names foo

          I want to check if a Docker repository exists. On success, this command returns a JSON blob on stdout. On failure, it writes a string to stderr. As of right now, what should be a simple case of pattern-matching on the stderr content when the exit code != 0 turns into something completely different. I've not yet come to a solution, but it's disappointing that what looks like a shell invocation step behaves nothing like the shell in practice.
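The plain shell does make this pattern easy, which is the frustration; roughly (with `ls` on a missing path standing in for the aws call, and the grep pattern matching its error text rather than the real aws message):

```shell
# Keep stdout and stderr in separate files so the error text can be
# pattern-matched only when the exit code is non-zero.
out=$(mktemp); err=$(mktemp)
if ls /nonexistent-repo-path >"$out" 2>"$err"; then
  result="exists"
else
  if grep -qi "no such file" "$err"; then   # pattern for the stand-in command
    result="missing"
  else
    result="unexpected: $(cat "$err")"
  fi
fi
rm -f "$out" "$err"
echo "result=$result"
```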


          Erik Blomqvist added a comment -

          I can't believe that you've been debating this since 2017. It's a basic feature, and wanting to do processing based on the return code and the output is not an "uncommon use case". Any chance we could get this ticket moving?

          Uliul Carpatin added a comment -

          +1 for this feature.

          A solution that would not break compatibility would be to:

          • add an extra parameter, 'returnBothOutputs'
          • return the two items as a list: [ returnStatusOptionContent, returnStdoutOptionContent ]

          Meghan Blanchard added a comment -

          I signed up just to be able to up-vote this ticket. It is an incredibly fundamental requirement; I don't understand the controversy.

          Nancy Robertson added a comment -

          I like uliulcarpatin's solution that does not break existing code.

          It or a variant thereof could also include stdErr...

          Zachary White added a comment -

          This really should just return an object, probably something similar to a Process object returned by String.execute() in Groovy. That way we can access stdout, stderr, and returnValue as needed. Outputting to file as a workaround is extremely hacky and unintuitive. It also breaks down as soon as we need to write multiple outputs to the same file as a running log.

          Was trying to implement this myself via Groovy's String.execute(), up until I realized it was always executing on master. This really is expected functionality for sh, and I was surprised to find this an open issue.

          Just return an object. Override the object's toString() to give returnStatus or returnStdout options, respectively, to help support backward compatibility. Have getReturnStatus(), getStdout(), and getStderr() properties accessible on the object so we can use them as needed.


          John Pfuntner added a comment -

          My case for doing this is to have a fail or retry only under certain conditions. I have scripts that do a lot of cloud work (aws, gcp, etc) so there are transient errors sometimes where a retry might be called for. However there are also failure scenarios that are not transient and a retry would not help. By examining the exit status and output, I could determine whether or not an operation was successful, failed, or could use a retry.


          David Riemens added a comment -

          My use case is an SVN operation to see if a TAG exists; here the command may return on STDOUT a revision number if the tag exists, a warning on STDERR if the tag does not exist yet, or an error on STDERR if something went wrong. Rather than a 5-line call to sh/bat, I now have a 40-line pipeline script with retry/try/catch, storing stdout/stderr in individual files, and reading them back... sigh.


          Anentropic added a comment - - edited

          I have exactly the same case as the OP

          Sad to see a sane implementation of this feature is still nowhere 3.5 years later

          And the docs for `sh` have disappeared https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step

          :chef-kiss:


          Levi Blaney added a comment -

          I also have wanted to be able to have the exitCode and the stdOut at the same time. About a year ago I started what I call the Jenkins Standard Library to help make building pipelines faster and easier. One of the first things I solved was these issues with `sh()`.

          @Library('jenkins-std-lib')
          import org.dsty.bash.BashClient
          import org.dsty.bash.ScriptError
          
          node() {    
          
              String msg = 'TestMessage'
          
              def bash = new BashClient(this)
          
              def result = bash("echo '${msg}'")
          
              if (result.stdOut != msg ) {
                  error('Did not contain correct output.')
              }
          
              if ( result.stdErr ) {
                  error('Should not have output anything.')
              }    
          
              if ( result.exitCode != 0 ) {
                  error('Exited with wrong code.')
              }    
          
              def exception = null
          
              try {
                  bash('fakecommand')
              } catch (ScriptError e) {
                  exception = e
              }    
          
              if ( !exception.stdErr.contains('fakecommand: command not found') ) {
                  error('Command was found.')
              }   
            
              if (exception.stdOut) {
                  error('Should not have stdOut.')
              }    
          
              if ( exception.exitCode != 127) {
                  error('Exited with wrong code.')
              }
          }
          
          

          Also not documented here is `result.output` which contains both the stdErr and stdOut but the order can't be guaranteed because bash is async. I also made the exception the same as the result because why wouldn't you want to include things like stdErr, stdOut and exitCode in your failure notification. Also `bash.silent()` which will not output ANYTHING to the build console. That way you can keep your build logs short and concise. Finally `bash.ignoreErrors()` which will return the result object even if the script errors. It also supports not sending output to the build console. 

          You can also browse the full documentation.

          I stopped working on the project when I couldn't find a way to test the code completely but about 2 weeks ago I found a new experimental way to test my library and now that I'm able to completely test the codebase I'm back to adding more features. 

          Right now the library only has the BashClient and LogClient but I think next I will add an easy way to POST and GET instead of dropping down to curl all the time. I think after that maybe github status checks or maybe a better slack/teams notification.

          It would be awesome if you would use my library and open github issues for features/feedback. If not you can always look at the BashClient code and see how I made it work and use it in your own shared libraries. The code is completely OpenSource and free. I will also work on a contributors guide if anyone is interested. 

          drewish cyberwizzard childnode mpettigr joanperez too many to tag so I will stop here.


          Dharma Indurthy added a comment -

          Wow, open since 2017, amazing. This is not still controversial, right? I believe there's a lot of tech debt that makes this hard, but I can't imagine why this is controversial.

          Haralds added a comment - - edited

           

          Hi everyone, I have an idea how to improve on this. Here is the exact plan:

          https://github.com/jenkinsci/workflow-durable-task-step-plugin/blob/2a88fdd885caa0d1bcf3336efc00b579b75ced82/src/main/java/org/jenkinsci/plugins/workflow/steps/durable_task/DurableTaskStep.java#L648

           

          if ((returnStatus && originalCause == null) || exitCode == 0) {  
          

           

          change it to something like (to cover the case where both stdout and the status code are requested):

           

          if ((returnStatus && originalCause == null) || exitCode == 0 || (returnStatus && originalCause == null && returnStdout ) ) {
          

           

          Then, if both are enabled, we will need an extra if, like:

           

          if (returnStdOut && returnStatus) {
            getContext().onSuccess(["status":exitCode, "output": new String(output.produce(), StandardCharsets.UTF_8)]);
          else {
             getContext().onSuccess(returnStatus ? exitCode : returnStdout ? new String(output.produce(), StandardCharsets.UTF_8) : null);
          }  

           

          That would maintain compatibility with previous code. Also would need to update (or remove) https://github.com/jenkinsci/workflow-durable-task-step-plugin/blob/2a88fdd885caa0d1bcf3336efc00b579b75ced82/src/main/java/org/jenkinsci/plugins/workflow/steps/durable_task/DurableTaskStep.java#L168 to allow both parameters in same time.
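          If a change along these lines landed, Pipeline usage might look like the following. This is a hypothetical sketch only: passing both flags is rejected by the step today, and the `status`/`output` keys are taken from the proposed snippet above, not from any existing API.

              // Hypothetical: sh returning a map when both flags are set.
              def r = sh(returnStatus: true, returnStdout: true, script: "git merge --no-edit $branches")
              if (r.status != 0) {
                  slackSend channel: SLACK_CHANNEL, message: "Merge failed:\n${r.output}"
                  error 'Merge conflict'
              }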

           


          m added a comment -

          I have a similar issue, except it's with bat calls. Is there a similar ticket for this or should it be handled here too?


          Ian Boudreaux added a comment -

          It would be great if Jenkins supported similar functionality for powershell as well.


          Martin d'Anjou added a comment -

          shadycuz instead of developing a curl wrapper, you can use the HTTP Request plugin. It already returns the HTTP status code and the content in a single call.


          Levi Blaney added a comment - - edited

          deepchip I see a lot of teams struggle with plugins for whatever reason. My library uses a Groovy-native implementation modeled on Python's requests library. You can see the docs here: https://javadoc.io/doc/io.github.dontshavetheyak/jenkins-std-lib/latest/org/dsty/http/Requests.html


          Pawel added a comment -

          Is this still unimplemented? Really!?


          Daniel added a comment - - edited

          +1 for this issue, hope it helps to move it forward. It's open since 2017!


          Noam Manos added a comment -

          Hi, see my workaround for this 2017 feature request: https://stackoverflow.com/questions/68967642/how-to-return-stdout-and-stderr-together-with-the-status-from-a-jenkins-pipeline/77900872#77900872 Hope it helps!
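          The common shape of such workarounds (a sketch of the general pattern, assuming bash or a POSIX sh; this is not the exact code from the linked answer) is to capture combined stdout+stderr through command substitution and read the exit status immediately afterwards, avoiding temp files entirely:

```shell
#!/bin/sh
# Capture combined stdout+stderr and the exit status in shell variables.
# For var=$(cmd), the assignment's exit status is the exit status of cmd.
output="$(sh -c 'echo hello; echo oops >&2; exit 3' 2>&1)"
status=$?

echo "status=$status"
echo "output=$output"
```

          The trade-off versus the temp-file approach is that stdout and stderr can no longer be told apart once merged.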

            Assignee: Unassigned
            Reporter: andrew morton (drewish)
            Votes: 167
            Watchers: 144