After updating to docker-workflow 1.15, all of my builds that follow this pattern started exiting with status -1 or -2:

      stage ('Unit Tests') {
         try {
            dir ("./somewhere/somepath") {
               sh "./somewhere/some.sh"
            }
            def userid = sh(script: 'grep Uid /proc/self/status | cut -f2', returnStdout: true).trim()
            image.inside("-v ${env.WORKSPACE}:/target -e USER=${env.USER} -e USERID=${userid} -e BRANCH_NAME=${BRANCH_NAME} -u ${userid} --link ...") { c ->
               sh "./somewhere/someother.sh"
               junit 'nosetests.xml'
               step([$class: 'CoberturaPublisher',
                 /* ... */ ])
               sh "git clean -fdx"
            }
         } finally {
            sh "./somewhere/somefinal.sh || true"
         }
      }

       

      All of the scripts here would run, but on some jobs somewhere/someother.sh would exit early, while other jobs (different projects, same format) would complete successfully yet still return -1, for example:

      ERROR: script returned exit code -2

       

      Reverting to docker-workflow 1.14 alleviated the problems.

       

      I started with docker-engine 17.06 running on Debian Stretch, and while diagnosing these problems I upgraded to docker-ce (17.12.0-ce, build c97c6d6). Upgrading Docker made no difference.
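
      For anyone trying to narrow this down: a minimal diagnostic sketch (not part of the failing jobs; someother.sh and the inside() arguments are just the placeholders used above) is to capture the exit code with returnStatus instead of letting sh abort the stage:

      // Diagnostic sketch: returnStatus makes sh return the exit code instead of
      // failing the step, so a wrapper-level -1/-2 can be told apart from a real
      // non-zero exit of the script itself.
      image.inside("-v ${env.WORKSPACE}:/target -u ${userid}") { c ->
         def rc = sh(script: "./somewhere/someother.sh", returnStatus: true)
         echo "someother.sh exited with status ${rc}"
         if (rc != 0) {
            error "script returned exit code ${rc}"
         }
      }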

       

          [JENKINS-49385] containers exit early in docker-workflow 1.15

          Marques Johansson created issue -

          Christoph Forster added a comment -

          We experience the same error. I added a comment about the behaviour in JENKINS-49278:

          When upgrading from docker-workflow-plugin 1.14 to 1.15 we not only see the error message (JENKINS-49278) but also long-running Docker containers being stopped for no apparent reason.

          E.g. we have a Gradle Docker image with a "gradle" ENTRYPOINT in the image itself and a long-running build process. Since the upgrade, the Docker container gets stopped after approx. one minute.

          After downgrading to 1.14, everything works fine again.

          The log difference is as follows:

          docker-workflow-plugin 1.14:
          ----------------------------------
          docker run -t -d -u 1000:1000 --name myContainer -w /var/jenkins_home/workspace/testjob --volumes-from 89f6d948fe0f285948be4705a73bbd1996db6e19ec88a4710761f0aa598b837b -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat mynexus:8083/mygradle:3.5
          
          docker-workflow-plugin 1.15:
          ----------------------------------
          docker run -t -d -u 1000:1000 --name myContainer -w /var/jenkins_home/workspace/testjob --volumes-from 846c20a2160d3460a6a864e0e626ed3433b157f4e36d9c4747dc7c30ac331b63 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** mynexus:8083/mygradle:3.5 cat
          docker top 8119cb13cc27981b4e285704288342131bd855e2c9cf3e0d24e7af717b94f16d
          ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument. See https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint for entrypoint best practices.
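
          The difference can also be reproduced outside Jenkins (a sketch; the container names wf114/wf115 are made up here, and mynexus:8083/mygradle:3.5 is the image from the log above):

          # 1.14 overrides the image ENTRYPOINT, so cat is what keeps the container alive:
          docker run -t -d --name wf114 --entrypoint cat mynexus:8083/mygradle:3.5
          # 1.15 only appends cat as an argument; with a "gradle" ENTRYPOINT the container
          # effectively runs "gradle cat" and exits as soon as that command finishes:
          docker run -t -d --name wf115 mynexus:8083/mygradle:3.5 cat
          # docker top shows what each container is actually running, which is the check
          # behind the "didn't run the expected command" error above:
          docker top wf114
          docker top wf115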
          

           


          Scott Williams added a comment -

          Likewise - the recent update started breaking our regular builds that had previously been working for several months.

          $ docker run -t -d -u 0:0 -w /opt/jenkins/workspace/Zendesk-bulk -v /opt/jenkins/workspace/Zendesk-bulk:/opt/jenkins/workspace/Zendesk-bulk:rw,z -v /opt/jenkins/workspace/Zendesk-bulk@tmp:/opt/jenkins/workspace/Zendesk-bulk@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** a3af309c86bf5b2ad033f313de7dc875df1f71f1 cat
          $ docker top 4b400dd886c360030f74480e189bcc4f15342f47712298490ef4c0c0b6d7ce30
          ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument. See
          https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint
          for entrypoint best practices.


          Scott Williams added a comment -

          I suspect that the issue is with "cat" being added to the end. For example, if the entrypoint is something like:

          ENTRYPOINT ["command", "--arg1", "--arg2"]

          ...then I suspect the docker run would be parsed as "command --arg1 --arg2 cat", based on https://docs.docker.com/engine/reference/builder/#entrypoint

          The workaround would probably be overriding the --entrypoint for the "cat" check.
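
          To illustrate the parsing (a sketch based on the Dockerfile reference linked above; my-image is a placeholder, and the --entrypoint override is only a guess at a workaround, not a confirmed plugin change):

          # With an exec-form ENTRYPOINT ["command", "--arg1", "--arg2"], the run arguments
          # become CMD, so the container executes: command --arg1 --arg2 cat
          docker run -t -d my-image cat

          # Clearing the entrypoint makes cat itself the long-running process that keeps
          # the container alive (hypothetical, e.g. passed through the image.inside() args):
          docker run -t -d --entrypoint "" my-image cat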


          jamie norman added a comment - edited

          This is completely breaking our build process; we rely on Docker to execute tools in our pipeline on the slaves.

          I have updated the priority to critical. I'm not sure if that is the right process, but we can't see any of our containers succeeding with this defect in place.

