I'm trying to compile ArangoDB in one of these docker containers:
      https://github.com/arangodb-helper/build-docker-containers/tree/master/distros
      It works flawlessly for most of the containers; however, I've seen it abort without any apparent reason for docker containers derived from these two base images:

      fedora:23 ubuntu:12.04

      The output in the web interface looks like this:

      [ 24%] Building CXX object lib/CMakeFiles/arango.dir/ApplicationFeatures/ApplicationServer.cpp.o
        CXX(target) /var/lib/jenkins/workspace/ArangoDB_Release/build-EPpackage-ubuntutwelveofour/3rdParty/V8/v8/x64.release/obj.target/icui18n/third_party/icu/source/i18n/fpositer.o
        CXX(target) /var/lib/jenkins/workspace/ArangoDB_Release/build-EPpackage-ubuntutwelveofour/3rdParty/V8/v8/x64.release/obj.target/icui18n/third_party/icu/source/i18n/funcrepl.o
        CXX(target) /var/lib/jenkins/workspace/ArangoDB_Release/build-EPpackage-ubuntutwelveofour/3rdParty/V8/v8/x64.release/obj.target/icui18n/third_party/icu/source/i18n/gender.o
      
        CXX(target) /var/lib/jenkins/workspace/ArangoDB_Release/build-EPpackage-ubuntutwelveofour/3rdParty/V8/v8/x64.release/obj.target/icui18n/third_party/icu/source/i18n/gregocal.o
      [ 24%] Building CXX object 3rdParty/rocksdb/rocksdb/CMakeFiles/rocksdblib.dir/db/db_iter.cc.o
        CXX(target) /var/lib/jenkins/workspace/ArangoDB_Release/build-EPpackage-ubuntutwelveofour/3rdParty/V8/v8/x64.release/obj.target/icui18n/third_party/icu/source/i18n/gregoimp.o
      [Pipeline] stage
      [Pipeline] { (Send Notification for failed build)
      [Pipeline] sh
      [ArangoDB_Release] Running shell script
      + git --no-pager show -s --format=%ae
      [Pipeline] mail
      
      [Pipeline] }
      [Pipeline] // stage
      
      [Pipeline] }
      $ docker stop --time=1 e0c5a42869989172c87fd272a714980602d7ec6c6b1be4655589b23f88b54760
      $ docker rm -f e0c5a42869989172c87fd272a714980602d7ec6c6b1be4655589b23f88b54760
      [Pipeline] // withDockerContainer
      [Pipeline] }
      [Pipeline] // withDockerRegistry
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] stage (Send Notification for build)
      Using the ‘stage’ step without a block argument is deprecated
      Entering stage Send Notification for build
      Proceeding
      [Pipeline] mail
      

      That was in the Ubuntu 12 container. In the Fedora container it barely gets to start the configure part of cmake and aborts in a similar manner, again without any particular reason.

      When running the container on an interactive terminal session, the whole build goes through without any issues.

          [JENKINS-39307] pipeline docker execution aborts without reason

          Jesse Glick added a comment -

          Steps to reproduce from scratch in a minimal, self-contained test case, please.


          Wilfried Goesgens added a comment - - edited

          This pipeline script:

          stage("test") {
              node("docker") {
                  docker.image("fedora:23").inside {
                      sh """while /bin/true; do echo "."; sleep 1; done"""
                  }
              }
          }
          

          results in this output:

          [Pipeline] stage
          [Pipeline] { (test)
          [Pipeline] node
          Running on master in /var/lib/jenkins/workspace/test fedora
          [Pipeline] {
          [Pipeline] sh
          [test fedora] Running shell script
          + docker inspect -f . fedora:23
          .
          [Pipeline] withDockerContainer
          $ docker run -t -d -u 1000:1000 -w "/var/lib/jenkins/workspace/test fedora" -v "/var/lib/jenkins/workspace/test fedora:/var/lib/jenkins/workspace/test fedora:rw" -v "/var/lib/jenkins/workspace/test fedora@tmp:/var/lib/jenkins/workspace/test fedora@tmp:rw" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat fedora:23
          [Pipeline] {
          [Pipeline] sh
          [test fedora] Running shell script
          + /bin/true
          + echo .
          .
          + sleep 1
          [Pipeline] }
          $ docker stop --time=1 34e905df5f32c4409491799a51463c23ca8efae8fd9ab32e940f8aebb25ed034
          $ docker rm -f 34e905df5f32c4409491799a51463c23ca8efae8fd9ab32e940f8aebb25ed034
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] End of Pipeline
          ERROR: script returned exit code -1
          Finished: FAILURE
          

          whereas using e.g. the docker image 'ubuntu:16.04' results in an endlessly continuing list of 'echo .' output, as expected.


          Jesse Glick added a comment -

          BTW please use {code:none}…{code} in JIRA to make issues legible. (I could edit your description but not your comment.)


          Jesse Glick added a comment -

          Possibly related to ENTRYPOINT changes, would need to dig into it.

          Also BTW the Environment field in this issue includes everything but the most relevant bit: which version of this plugin you are running.


          Wilfried Goesgens added a comment -

          OK, added. Isn't there a Markdown plugin available for JIRA? I'm tired of learning stuff just because. The `Style` dropdown doesn't know about code sections.


          Wilfried Goesgens added a comment -

          The first time I observed this behaviour was on Oct 12, with whatever the latest released Jenkins/plugin versions were back then.
          Meanwhile I have also seen this happen at least once with an OpenSuSE 13.1 container.


          Wilfried Goesgens added a comment -

          The suggested workaround:

          stage("test") {
              node("docker") {
                  docker.image("fedoratwentythree/build").withRun {c ->
                      sh "cat /etc/issue; while /bin/true; do echo 'x'; sleep 1; done"
                  }
              }
          }
          

          will not terminate, but it also won't work properly: the content of `/etc/issue` shows that the `sh` step is not executed inside the docker container:

          + cat /etc/issue
          Ubuntu 16.04.1 LTS \n \l
          
          + /bin/true
          + echo x
          x
          


          Jesse Glick added a comment -

          the sh-step is not executed inside of the docker container

          Nor is it supposed to be. withRun merely starts a container, then stops it. Anything you want to do with that container should be done via the closure parameter.
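
          A minimal sketch of that pattern (assuming the two-argument withRun(args, command) overload; the "sleep 60" keep-alive command and the exec'd command are only for illustration): the closure receives the started container, and its id can be used to exec into it, whereas a bare `sh` still runs on the host:

          stage("test") {
              node("docker") {
                  // keep the container alive so there is something to exec into
                  docker.image("fedora:23").withRun("", "sleep 60") { c ->
                      // this runs inside the started container, addressed via its id
                      sh "docker exec ${c.id} cat /etc/issue"
                  }
              }
          }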


          Wilfried Goesgens added a comment -

          Some more testing, discussing with the Fedora cloud team:
          https://github.com/fedora-cloud/docker-brew-fedora/issues/43

          It seems that this behaviour was introduced with FC 21->22, as one can quickly verify by changing the image tag used for the container.
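
          A quick way to verify that (a sketch; the inner shell loop is arbitrary) is to run the same trivial reproduction against both tags and see which of the two runs gets aborted:

          stage("bisect") {
              node("docker") {
                  // per the discussion above, fedora:21 still ships ps while fedora:22 does not
                  for (tag in ["fedora:21", "fedora:22"]) {
                      docker.image(tag).inside {
                          sh 'for i in 1 2 3 4 5; do echo .; sleep 1; done'
                      }
                  }
              }
          }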


          Wilfried Goesgens added a comment - - edited

          OK, there is a workaround, but man, that's hacky!

          Use `nohup` to run the script itself, and afterwards have Jenkins poll whether the process still exists by checking for its PID via the proc filesystem.

          // wrap the build script: run it detached with nohup, remember its PID,
          // stream its output with tail, and kill the tail once the script finishes
          BUILDSCRIPT = "nohup ${BUILDSCRIPT} > nohup.out 2>&1 & PID=\$!; echo \$PID > pid; tail -f nohup.out & wait \$PID; kill %2"
          try {
            if (VERBOSE) {
              print(BUILDSCRIPT)
            }
            sh BUILDSCRIPT
          }
          catch (err) {
            // the sh step was aborted by Jenkins although the build may still be running;
            // poll /proc until the detached process is really gone
            def RUNNING_PID = readFile("pid").trim()
            def stillRunning = true
            while (stillRunning) {
              def processStat = ""
              try {
                def statCmd = "cat /proc/${RUNNING_PID}/stat 2>/dev/null"
                echo "script: ${statCmd}"
                processStat = sh(returnStdout: true, script: statCmd)
              }
              catch (x) {}
              stillRunning = (processStat != "")
              sleep 5
            }
            sh "tail -n 100 nohup.out"
          }
          


          Wilfried Goesgens added a comment - - edited

          It seems as if Jenkins doesn't properly detect the PID of the process.

          It builds a command line like this:

          [pid 6503] execve("/usr/bin/docker", ["docker", "exec", "e58e441dbf413c4180bca9f3e8db816eed1f3d985c21036fd93d9c970174141d", "env", "ps", "-o", "pid=", "7"], [/* 15 vars */] <unfinished ...>

          which will always produce an empty result, since it should be "pid=7" instead.

          [edit]

          If ps is not installed in the container, this will fail miserably.


          Jesse Glick added a comment -

          Yes you need at least cat and ps in the container. Currently the plugin does not try to verify these prerequisites or provide tailored error messages.
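
          One way to satisfy those prerequisites is to bake them into a derived image before using it with .inside (a sketch; the image tag is made up, and procps-ng is the Fedora package providing ps):

          node("docker") {
              // build a derived image from a minimal Dockerfile written into the workspace
              writeFile file: 'Dockerfile', text: 'FROM fedora:23\nRUN dnf install -y procps-ng && dnf clean all\n'
              def img = docker.build("local/fedora23-with-ps")
              img.inside {
                  // both prerequisites of the durable-task machinery are now present
                  sh "command -v ps && command -v cat"
              }
          }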


          Jesse Glick added a comment -

          Oh, and no, it should not be ps -o pid=7; ps -o pid= 7 is intentional.
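
          In other words (just an illustration, not plugin code): the "=" after pid suppresses the column header, and the trailing number is a separate process-selector argument:

          sh 'ps -o pid= 7'    // prints PID 7 (without a header) if the process exists, nothing otherwise
          sh 'ps -o pid=7'     // something else entirely: "7" becomes the column header text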


          Wilfried Goesgens added a comment -

          As discussed on IRC, so it's not lost: stat'ing the /proc/<pid> directory should be just as portable and would remove the need for the ps command. It should also be faster, since [ -d /proc/<pid> ] can be done as a shell built-in and doesn't need to fork a ps process.
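
          A rough sketch of how such a check could look from a pipeline step (this is not what the plugin currently does; "pid" is the file written by the nohup workaround above):

          def RUNNING_PID = readFile("pid").trim()
          // the test is a shell built-in: no ps binary required in the container, no extra fork
          def stillRunning = sh(returnStatus: true, script: "[ -d /proc/${RUNNING_PID} ]") == 0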


          Another cure would be to count the ps calls: if there was only one attempt and the docker shutdown still finds processes running in the container, add a warning to the error message saying whether ps is available in the container or not.


          Patrick Kaufmann added a comment -

          OK, after some poking around in the code (2 days) I've probably found a fix for this problem. The culprit seems to be the durable-task-plugin, not the docker-workflow-plugin.

          Basically what happens is that the "ps" command which determines whether an "sh" command is still running gets executed against the wrong docker host whenever the docker host is anything other than localhost. This makes the durable-task-plugin believe that the script terminated unexpectedly, which in turn aborts the whole pipeline.

          The fix adds the same env vars to the ps command as were set for the sh command, so both run against the same host.

          The corresponding pull request is here: https://github.com/jenkinsci/durable-task-plugin/pull/40


          Wilfried Goesgens added a comment -

          kufi, since my original problem was that there was no ps command inside of that docker container, I don't think your problem is the same bug.

          As suggested above, using ps should be avoided altogether so the dependency can be removed.


          Jesse Glick added a comment -

          Possibly solved by JENKINS-47791, not sure.

