JENKINS-59781

sshCommand fails to flush full log on failure

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: ssh-steps-plugin
    • Labels: None

      I have a Jenkins job where I'm running Ansible through sshCommand:

      sshCommand remote: remote, command: "ansible-playbook lb.yml"

      However, if the last step of the Ansible playbook fails, the plugin throws an exception but doesn't print the error logs for that final step. Running the same command directly in a shell prints the additional logs.

      Jenkins log:

      TASK [lb : Configure nginx conf] ******************************************
      changed: [server]
      TASK [lb : Restart nginx] *************************************************
      [Pipeline] }
      [Pipeline] // withCredentials
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from <server-ip>
              at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
              at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
              at hudson.remoting.Channel.call(Channel.java:955)
              at org.jenkinsci.plugins.sshsteps.steps.CommandStep$Execution.run(CommandStep.java:72)
              at org.jenkinsci.plugins.sshsteps.util.SSHStepExecution.lambda$start$0(SSHStepExecution.java:84)
      ... (additional stack trace lines omitted)
      org.hidetake.groovy.ssh.session.BadExitStatusException: Command returned exit status 2: ansible-playbook lb.yml
      

      Shell log:

      TASK [lb : Configure nginx conf] ******************************************
      changed: [server]
      TASK [lb : Restart nginx] *************************************************
      fatal: [server]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed because the control process exited with error code.\nSee \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}
      	to retry, use: --limit @/ansible/lb.retry
      
      PLAY RECAP ****************************************************************
      server   : ok=1   changed=0    unreachable=0    failed=1   
      

      I checked, and all of these log lines go to stdout.

      These additional log lines are necessary for debugging the issue (the workaround right now is to run the Ansible playbook directly from the node, which isn't great), so it would be great to flush them before throwing this BadExitStatusException.

          [JENKINS-59781] sshCommand fails to flush full log on failure

          Naresh Rayapati added a comment -

          jiangtyd Thank you for reporting this. This is a known behavior, and the workaround is to wait a couple of seconds at the end of the pipeline so the logs can catch up. For more info, refer to the comment on this older issue: https://issues.jenkins-ci.org/browse/JENKINS-57765

          Let me know if you are still facing the issue after applying this workaround. Thank you again.


          Naresh Rayapati added a comment -

          jiangtyd any luck with the above suggestion?


          Damien Jiang added a comment -

          Hi nrayapati, sorry for the late response; I've been focused on some other things in the past week.

          I again had a failure in the deploy, so I tried again.

          Adding the post block

              post {
                  failure {
                      sleep 5
                  }
              }
          

          to the end of my pipeline does, in fact, display the error.

          This is not ideal, but should get the job done for now...


          Christian Ciach added a comment - edited

          This issue applies not only to failures, but to all logging from this plugin (even its own status logging, like "Executing command on name[server]: mycommand sudo: false").

          Looking at the code, it seems to me that the likely issue is a missing flush() call on the CustomLogHandler (or maybe the rootLogger) at the end of this method: https://github.com/jenkinsci/ssh-steps-plugin/blob/c35e7db86e975b257343179fd4f08dd0af8cbb42/src/main/groovy/org/jenkinsci/plugins/sshsteps/SSHService.groovy#L55

          That being said, is it really a good idea to register a new CustomLogHandler every time something gets logged? The rootLogger is most likely static, so adding new handlers for each and every message will eventually lead to a ton of CustomLogHandlers that are probably never garbage collected.


          Arnaud Bourree added a comment -

          sshCommand is useless if we cannot get stdout in the pipeline. I need to retrieve information from a remote server, and I don't want to have sshCommand generate a temporary file and then sshGet it.


          Vlado Sloboda added a comment -

          I've been facing the same issue for months. I run a few commands using sshCommand, and if processing fails, the console log usually shows only the result of the first command; the rest is missing. It's impossible to figure out why (and which) command failed.

          The timestamps in the existing logging output don't match the overall processing times, so it's a real pain to identify what was executed and when by reading the console log.

          So far I've used the workaround with a 5-second sleep, but I considered that temporary until this gets fixed. Unfortunately, nothing is moving in this area, despite the many complaints I've read about this behavior. I can't believe this is such a big problem to fix. It's such a useful and handy plugin, but this bug is pushing me to leave it for a more reliable solution.

          Please consider fixing this.


          Ilya Skorik added a comment -

          I'm also surprised it doesn't work as expected. The command is useless if you can't see the cause of the error.

          Alternatively, add the ability to get the command's exit code when failOnError: false is specified.


          Abdelaziz Raji added a comment -

          Any update on this one? I'm unable to get this to work cleanly, as the output is always truncated.

          Are there any good workarounds, plugins, or Groovy snippets that people are using to just run a command over SSH and return the output without dropping anything?


            Assignee: Naresh Rayapati (nrayapati)
            Reporter: Damien Jiang (jiangtyd)
            Votes: 5
            Watchers: 11
