
Pipeline shell step aborts prematurely with ERROR: script returned exit code -1

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Component: durable-task-plugin
    • Labels: None
    • Released As: durable-task 1.26

      A few of my Jenkins pipelines failed last night with this failure mode:

      01:19:19 Running on blackbox-slave2 in /var/tmp/jenkins_slaves/jenkins-regression/path/to/workspace. [Note: this is an SSH slave]
      [Pipeline] {
      [Pipeline] ws
      01:19:19 Running in /net/nas.delphix.com/nas/regression-run-workspace/jenkins-regression/workspace@10. [Note: This is an NFS share on a NAS]
      [Pipeline] {
      [Pipeline] sh
      01:20:10 [qa-gate] Running shell script
      [... script output ...]
      01:27:19 Running test_create_domain at 2017-11-29 01:27:18.887531... 
      [Pipeline] // dir
      [Pipeline] }
      [Pipeline] // ws
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] }
      [Pipeline] // timestamps
      [Pipeline] }
      [Pipeline] // timeout
      ERROR: script returned exit code -1
      Finished: FAILURE
      

      As far as I can tell, the script was running fine, but Jenkins apparently killed it prematurely because it didn't think the process was still alive.

      The interesting thing is that this normally works, but last night it failed at exactly the same time in multiple pipeline jobs, and I only started seeing this after upgrading durable-task-plugin from 1.14 to 1.17. Looking at the code change, the main difference is that ProcessLiveness switched from a ps-based check to a timestamp-based one. My suspicion is that the NFS server hosting this workspace wasn't processing I/O operations fast enough when the problem occurred, so the timestamp wasn't updated even though the script continued running. Note that I am not using Docker here; this is just a regular SSH slave.
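
      To make the suspected mechanism concrete, here is a minimal, purely illustrative Java sketch of the two liveness strategies; it is not the durable-task plugin's actual code, and all names in it are hypothetical. The old style asks the agent's OS whether the PID still exists, while the new style only looks at how recently a heartbeat file was touched, which is exactly what a stalled NFS mount can break.

      {code:java}
      // Illustrative sketch only; NOT the durable-task plugin's real implementation.
      // Contrasts the ps-based liveness check with the timestamp-based one.
      import java.io.File;
      import java.io.IOException;

      public class LivenessSketch {

          /** Old style: ask the OS whether a process with this PID still exists. */
          static boolean aliveByPs(int pid) throws IOException, InterruptedException {
              Process ps = new ProcessBuilder("ps", "-p", Integer.toString(pid)).start();
              return ps.waitFor() == 0; // exit code 0 means the PID was found
          }

          /** New style: treat the process as dead if its heartbeat file is stale. */
          static boolean aliveByTimestamp(File heartbeat, long maxAgeMillis) {
              long lastTouched = heartbeat.lastModified(); // 0 if the file is missing
              return lastTouched != 0
                  && System.currentTimeMillis() - lastTouched <= maxAgeMillis;
          }

          public static void main(String[] args) throws Exception {
              File heartbeat = new File(args[0]); // some file the running script keeps touching
              // If the NFS server stops servicing writes, lastModified() stops advancing
              // even though the shell script is still running, and this reports "dead".
              System.out.println("alive (timestamp check): " + aliveByTimestamp(heartbeat, 15_000L));
          }
      }
      {code}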

      The ps-based approach may have been suboptimal, but it was more reliable for us than the new timestamp-based approach, at least with NFS-based workspaces. Expecting a file's timestamp to advance every 15 seconds may be a tall order for some system and network administrators, especially over NFS: network issues can and do happen, and they shouldn't take down Jenkins jobs when they do. Our Jenkins jobs used to just hang during an NFS outage; now the script liveness check kills the job. I view this as a regression. As flawed as the old approach may have been, it was immune to this failure mode. Is there anything I can do besides increasing various timeouts to avoid hitting this? The fact that no diagnostic information was printed to the Jenkins log or the SSH slave remoting log is also problematic.
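
      For the "increasing various timeouts" option mentioned above, the general shape such a knob usually takes is a grace period read from a JVM system property, as in the sketch below. The property name example.durabletask.heartbeatGraceSeconds is made up for illustration; whether and how the real plugin exposes such a setting would need to be checked against the installed durable-task version.

      {code:java}
      // Hypothetical sketch of a tunable liveness window; the property name below
      // is invented for illustration and is not a documented durable-task setting.
      import java.io.File;

      public class TunableLivenessSketch {
          // Grace period in seconds, overridable with e.g.
          //   -Dexample.durabletask.heartbeatGraceSeconds=300
          static final long GRACE_MILLIS =
                  Long.getLong("example.durabletask.heartbeatGraceSeconds", 15L) * 1000L;

          static boolean looksAlive(File heartbeat) {
              long age = System.currentTimeMillis() - heartbeat.lastModified();
              return heartbeat.exists() && age <= GRACE_MILLIS;
          }

          public static void main(String[] args) {
              System.out.println("alive: " + looksAlive(new File(args[0])));
          }
      }
      {code}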

          [JENKINS-48300] Pipeline shell step aborts prematurely with ERROR: script returned exit code -1

          Basil Crow created issue -
          Basil Crow made changes -
          Link New: This issue relates to JENKINS-47791 [ JENKINS-47791 ]
          Jesse Glick made changes -
          Remote Link New: This issue links to "durable-task PR 57 (Web Link)" [ 19953 ]
          Jesse Glick made changes -
          Remote Link New: This issue links to "workflow-durable-task-step PR 62 (Web Link)" [ 19954 ]
          Sam Van Oort made changes -
          Resolution New: Fixed [ 1 ]
          Status Original: Open [ 1 ] New: Closed [ 6 ]
          Moritz Baumann made changes -
          Comment [ [~svanoort]:
          We're periodically running into this even though we don't use NFS on either the master or the slaves and even though we're using the fastest durability setting, so I did some research. It looks like between two heartbeat checks, there are a lot of network I/O operations between master and slave which can easily cause a timeout, even without NFS. Therefore, the current error message was extremely misleading in our case.

          At the very least, the error message should be changed to make people aware that the heartbeat timestamps are compared on the Jenkins master and that there are a lot of other network operations happening in between those two heartbeat checks. Without a code review of both plugins involved (Durable Task, Durable Task Step), I would have never figured that out.

          But I'm also questioning whether the defaults are sensible at all. Why should Jenkins assume that the shell process is dead just because a bunch of network operations between master and slave took more than 15 seconds to complete? That's an awfully short time span.

          Please reconsider the default value for this. I think something in the order of minutes might be more reasonable; short-term network congestion can happen from time to time and shouldn't cause builds to fail.
          ]
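
          As a rough illustration of the minutes-scale tolerance suggested in the comment above, a checker could require several consecutive stale readings before giving up, so that a single slow stretch of master/slave network I/O cannot kill the build. This is a hypothetical sketch, not the plugin's actual or proposed implementation.

          {code:java}
          // Hypothetical "forgiving" liveness policy: only give up after several
          // consecutive stale heartbeat readings, instead of a single 15-second miss.
          import java.io.File;

          public class ForgivingLivenessSketch {
              private final File heartbeat;
              private final long staleAfterMillis;  // e.g. 15_000 (one heartbeat period)
              private final int missesBeforeDead;   // e.g. 12 misses, roughly 3 minutes
              private int consecutiveMisses;

              ForgivingLivenessSketch(File heartbeat, long staleAfterMillis, int missesBeforeDead) {
                  this.heartbeat = heartbeat;
                  this.staleAfterMillis = staleAfterMillis;
                  this.missesBeforeDead = missesBeforeDead;
              }

              /** Called once per check cycle; true only after repeated staleness. */
              boolean shouldGiveUp() {
                  long age = System.currentTimeMillis() - heartbeat.lastModified();
                  if (heartbeat.exists() && age <= staleAfterMillis) {
                      consecutiveMisses = 0;        // a fresh heartbeat resets the counter
                      return false;
                  }
                  consecutiveMisses++;
                  return consecutiveMisses >= missesBeforeDead;
              }
          }
          {code}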
          Federico Naum made changes -
          Description edited
          Craig Rodrigues made changes -
          Assignee New: Sam Van Oort [ svanoort ]
          Jesse Glick made changes -
          Assignee Original: Sam Van Oort [ svanoort ] New: Jesse Glick [ jglick ]
          Jesse Glick made changes -
          Resolution Original: Fixed [ 1 ]
          Status Original: Closed [ 6 ] New: Reopened [ 4 ]
          Jesse Glick made changes -
          Remote Link New: This issue links to "durable-task PR 81 (Web Link)" [ 21336 ]

            Assignee: Jesse Glick (jglick)
            Reporter: Basil Crow (basil)
            Votes: 6
            Watchers: 33