JENKINS-55308

intermittent "terminated" messages using sh in Pipelines

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor

      Testing Jenkins 2.138.2 LTS, Jenkins pipelines that use sh intermittently throw the following message in the console log …

       

      sh: line 1:  4449 Terminated              sleep 3

       

       … and sometimes this … 

       

      sh: line 1: 13136 Terminated              { while [ ( -d /proc/$pid -o ! -d /proc/$$ ) -a -d '/home/ec2-user/workspace/admin-smoke-test@tmp/durable-523481b0' -a ! -f '/home/ec2-user/workspace/admin-smoke-test@tmp/durable-523481b0/jenkins-result.txt' ]; do    touch '/home/ec2-user/workspace/admin-smoke-test@tmp/durable-523481b0/jenkins-log.txt'; sleep 3; done; }

       

      Jenkins master runs from a Docker image based on jenkins/jenkins:2.138.2-alpine, with specific plugins baked into the image by /usr/local/bin/install-plugins.sh.

      The message originates in durable-task-plugin, which must be a dependency of one of the plugins.txt plugins.  

      Two important observations:

      1) The issue does not occur when starting with the base jenkins/jenkins:2.138.2-alpine image and manually installing plugins via the UI. That suggests the issue may lie in how install-plugins.sh installs plugins and/or dependencies.

      2) The issue does not occur on our production image, which is also 2.138.2-alpine + plugins, built 2018-10-11. Rebuilding the same image from the same Dockerfile results in different installed plugins, which makes me think results using install-plugins.sh are not deterministic (a sketch of version pinning follows below).
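
      A minimal sketch of that non-determinism, assuming a plugins.txt of name:version lines (the plugin names and versions below are illustrative, not our actual list): pinned entries make repeated builds reproducible, while bare plugin names resolve to whatever is latest at build time.

      # Illustrative plugins.txt; the names and versions here are assumptions, not the real list.
      # Pinning "name:version" makes repeated builds install identical plugins;
      # bare names resolve to the latest release at build time.
      printf '%s\n' 'workflow-job:2.25' 'durable-task:1.28' > plugins.txt

      # Same invocation the official jenkins/jenkins image documents for baking plugins in:
      /usr/local/bin/install-plugins.sh < plugins.txt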


          Kieron Kierzo added a comment -

          Hey,

          Had this issue on my Mac build.

          My fix was to set the JAVA_HOME variable which was missing on the machine.

          I created and added this file on the Mac: "~/.bash_profile"

          and added this line:

          export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-11.0.2.jdk/Contents/Home

          Rebooted the Mac and that fixed it.

          your "jdk-11.0.2.jdk" folder may be different so just replace that with whatever is in there.

          If you run the command:

          echo ${JAVA_HOME}

           

          and it's empty, this could be the cause.
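
          A consolidated sketch of the fix described above; the jdk-11.0.2.jdk folder is just the example from this comment and will differ per machine.

          # Append the export to ~/.bash_profile only when JAVA_HOME is currently empty;
          # substitute whatever JDK folder actually exists under /Library/Java/JavaVirtualMachines.
          if [ -z "${JAVA_HOME}" ]; then
            echo 'export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-11.0.2.jdk/Contents/Home' >> ~/.bash_profile
          fi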

           

          Hope this helps others.

           

          Cheers Kieron.

           

           


          Jakov Sosic added a comment -

          It seems that adding $JAVA_HOME in Global properties indeed fixes this!

          Since I run CentOS 7, this is what I've added:

          Name: JAVA_HOME

          Value: /usr/lib/jvm/jre-1.8.0-openjdk
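
          As a quick check (a sketch, assuming the CentOS 7 value above), any Pipeline sh step should now see the variable; empty output would mean the global property is not reaching the node where the step runs.

          # Run inside a Pipeline sh step (or a shell on the agent) to confirm the value:
          echo "JAVA_HOME seen by the step: ${JAVA_HOME:-<empty>}"
          ls -d /usr/lib/jvm/jre-1.8.0-openjdk   # the CentOS 7 path configured above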


          Cuong Tran added a comment -

          Was this property added to the agent or the master?


          Wen Zhou added a comment -

          If the bug is really introduced by the workflow-job plugin, can we expect a fix in the new release?

           


          Pablo Rodriguez added a comment - edited

          The issue could be reproduced by running this simple Pipeline:

          Pipeline script
          pipeline {
              agent { label 'nodelinux' }
              stages {
                  stage('build') {
                      steps {
                          echo "Hello World!"
                          sh "echo Hello from the shell"
                      }
                  }
              }
          }
          

          Console output:

          [Pipeline] Start of Pipeline
          [Pipeline] node
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (build)
          [Pipeline] echo
          Hello World!
          [Pipeline] sh
          + echo Hello from the shell
          Hello from the shell
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] End of Pipeline
          sh: line 1: 18285 Terminated sleep 3
          Finished: SUCCESS
          

          Environment:

          • CloudBees Jenkins Platform - Master 2.176.2.3

          • workflow-job:2.32 'Pipeline: Job'

          It started happening after workflow-job was updated from 2.25 to 2.32.


          Carroll Chiou added a comment - edited

          The sleep 3 process was introduced when the heartbeat check feature was added in version 1.16 of durable-task, which translates to 2.18 of workflow-durable-task-step, which translates to 2.27 of workflow-job.

          Unless I missed a comment, it looks like the pipeline behaves as expected apart from the "Terminated sleep 3" line.

          At least in parogui’s example, this might be a race condition between the sleep 3 process and when the script completes. Like dnusbaum mentioned earlier, the sleep 3 process is used to touch the output log file to show the script is still alive. However, when the script completes, it will write to a separate result file. A watcher service is checking for that result file every 100ms. Once that result file is found, results are transmitted and everything related to that specific step’s workspace is purged. It might be possible that the output file gets cleaned up right after the sleep 3 process checks if the file still exists, but before it gets touched again?
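
          A stripped-down sketch of that mechanism (not the plugin's exact wrapper; paths and result handling are simplified) shows where the message text comes from: when the heartbeat is cleaned up while it is still sleeping, the shell reports the killed background job on stderr, which is exactly the "Terminated" line seen in the console. Depending on which process is signalled, the reported job text is the whole loop or just its sleep 3, matching the two message variants in the description.

          #!/bin/bash
          control_dir=$(mktemp -d)

          # Heartbeat: touch the log file every 3 seconds until a result file appears.
          while [ ! -f "$control_dir/jenkins-result.txt" ]; do
            touch "$control_dir/jenkins-log.txt"
            sleep 3
          done &
          hb_pid=$!

          echo "Hello from the shell"                  # the user's actual sh step
          echo 0 > "$control_dir/jenkins-result.txt"   # result written when the step finishes

          # Killing the heartbeat while it is inside its sleep makes bash print a
          # "Terminated" report for the background job (here the whole loop).
          kill "$hb_pid"
          wait
          rm -rf "$control_dir"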

          There is a new release of durable-task (1.31) that removes the sleep 3 process so this line won't pop up anymore.

          Update: I have not been able to reproduce this issue, so I can't say for certain whether it is resolved. Technically, it should be, but it's possible the new version just changes the behavior of this bug.
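
          To pick up that fix in an image built the way the description shows, the plugin can be pinned at or above that release; the line below reuses install-plugins.sh's documented name:version argument form and is a sketch, not our actual build step.

          # Pin durable-task to the release that drops the heartbeat sleep:
          /usr/local/bin/install-plugins.sh durable-task:1.31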


          A Alvarez added a comment -

          Can confirm that after upgrading durable-task to the latest version, the Terminated messages are gone from our jobs.


          Carroll Chiou added a comment -

          aal I'm curious, if you are running 1.33, are you passing in the FORCE_BINARY_WRAPPER=true system property?


          A Alvarez added a comment -

          Hi carroll, not as far as I'm aware, although this could be a default or coming from another plugin. All we did was upgrade the plugin, and the messages disappeared from the console output when running shell commands within Pipeline scripts.


          Carroll Chiou added a comment -

          aal thanks, that's good to know


            Assignee: Carroll Chiou (carroll)
            Reporter: gc875
            Votes: 24
            Watchers: 44
