• Type: Bug
    • Resolution: Not A Defect
    • Priority: Major
    • Component: docker-workflow-plugin
    • None
    • Environment: Docker Pipeline 1.15 + Jenkins core 2.89.3

      I noticed that after upgrading to 1.15, steps such as `docker.image().inside` have begun to fail with:

      java.io.IOException: Failed to run top '7924b7207cfe14b8abba497c6051504cf0de0c02b40190b3688b78d680f3ee81'. Error: Error response from daemon: Container 7924b7207cfe14b8abba497c6051504cf0de0c02b40190b3688b78d680f3ee81 is not running
      	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
      	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:185)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
      	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
      	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
      	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
      	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
      	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:19)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:135)
      	at org.jenkinsci.plugins.docker.workflow.Docker.node(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:66)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:123)
      	at org.jenkinsci.plugins.pipeline.modeldefinition.agent.impl.DockerPipelineScript.runImage(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/agent/impl/DockerPipelineScript.groovy:57)
      	at org.jenkinsci.plugins.pipeline.modeldefinition.agent.impl.AbstractDockerPipelineScript.configureRegistry(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/agent/impl/AbstractDockerPipelineScript.groovy:67)
      	at org.jenkinsci.plugins.pipeline.modeldefinition.agent.impl.AbstractDockerPipelineScript.run(jar:file:/var/jenkins_home/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/agent/impl/AbstractDockerPipelineScript.groovy:53)
      	at org.jenkinsci.plugins.pipeline.modeldefinition.agent.CheckoutScript.checkoutAndRun(jar:file:/var/jenkins_home/plugins/pipeline-model-extensions/WEB-INF/lib/pipeline-model-extensions.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/agent/CheckoutScript.groovy:63)
      	at ___cps.transform___(Native Method)
      	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      	at sun.reflect.GeneratedMethodAccessor223.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
      	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
      	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
      	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
      	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:331)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:82)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:243)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:231)
      	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
      	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Finished: FAILURE
      

      I have witnessed this with several Pipelines using `docker.image().inside` in Scripted Pipeline on ci.jenkins.io.

      The only remediation I could find was to downgrade to 1.14.

          [JENKINS-49446] Regression with 1.15 and WithContainerStep

          R. Tyler Croy added a comment -

          As best I can tell from my research, it looks like a container ID is being generated inside the Docker code which doesn't actually match the name of the container on the host?

          Quite perplexed.


          R. Tyler Croy added a comment -

          In further testing I was able to reproduce this in a pristine environment with the Jenkinsfile used by jenkinsci/docker.

          It seems like a simple use case such as the following works fine with 1.15:

          node {
            docker.image('node:alpine').inside {
              sh 'pwd'
            }
          }
          

          When I used the same container used by jenkinsci/docker, koalaman/shellcheck:v0.4.6, I reproduced the error, with the following log output:

          Started by user admin
          Replayed #2
          Running in Durability level: MAX_SURVIVABILITY
          [Pipeline] node
          Running on Jenkins in /var/jenkins_home/workspace/example
          [Pipeline] {
          [Pipeline] sh
          [example] Running shell script
          + docker inspect -f . koalaman/shellcheck:v0.4.6
          .
          [Pipeline] withDockerContainer
          Jenkins seems to be running inside container 439d08e94a52ab7b4b298fa60194c16ac5406ab6eb6aa32cacb7c714ea775c70
          $ docker run -t -d -u 0:0 -w /var/jenkins_home/workspace/example --volumes-from 439d08e94a52ab7b4b298fa60194c16ac5406ab6eb6aa32cacb7c714ea775c70 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** koalaman/shellcheck:v0.4.6 cat
          $ docker top a224b71731d78d747d4290a34ca6c178fdcccc0863800213323c990d4b9fa654
          ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument. See https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint for entrypoint best practices.
          [Pipeline] {
          [Pipeline] sh
          [example] Running shell script
          Error response from daemon: Container a224b71731d78d747d4290a34ca6c178fdcccc0863800213323c990d4b9fa654 is not running
          
          [Pipeline] }
          $ docker stop --time=1 a224b71731d78d747d4290a34ca6c178fdcccc0863800213323c990d4b9fa654
          $ docker rm -f a224b71731d78d747d4290a34ca6c178fdcccc0863800213323c990d4b9fa654
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] End of Pipeline
          ERROR: script returned exit code -2
          Finished: FAILURE
          

          What's interesting to me is that there is an ENTRYPOINT error message in this log.
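The ENTRYPOINT interaction behind that error message can be sketched as follows. This is a hypothetical illustration — the actual entrypoint path inside koalaman/shellcheck is an assumption — but the mechanism is standard Docker behaviour: with an ENTRYPOINT set, the trailing `cat` that the plugin appends to `docker run` becomes an argument to the entrypoint rather than the command itself, so the container exits immediately instead of idling for `docker top`:

```shell
# Sketch of how Docker composes ENTRYPOINT and CMD.
# (The entrypoint value below is an assumption for illustration.)
entrypoint="/bin/shellcheck"   # set by the image
cmd="cat"                      # appended by the Docker Pipeline plugin

# With the image's ENTRYPOINT in effect, `docker run image cat` runs
# the entrypoint with "cat" as its argument; shellcheck exits at once,
# so the container stops before the plugin can attach to it:
echo "container runs: $entrypoint $cmd"

# With --entrypoint="" the appended command is executed directly and
# keeps the container alive, which is what `docker top` expects:
echo "container runs: $cmd"
```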

          The same container, executing on ci.jenkins.io with Docker Pipeline 1.15, has this log output:

          Started by user rtyler
          Connecting to https://api.github.com using jenkinsadmin/****** (GitHub access token for jenkinsadmin)
          Loading trusted files from base branch master at f5859db6513c6f4ce1d687a5cbd75556ca884c64 rather than aa517f2db161c52bee8c6bbd083ff9a10abe18e4
          Obtained Jenkinsfile from f5859db6513c6f4ce1d687a5cbd75556ca884c64
          Running in Durability level: MAX_SURVIVABILITY
          Loading library pipeline-library@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision ec29bbe966194298de7db8b323a816986e8a8f47
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision ec29bbe966194298de7db8b323a816986e8a8f47 (master)
          Commit message: "Merge pull request #24 from jglick/jdk-stage-fix"
          [Pipeline] properties
          [Pipeline] timeout
          Timeout set to expire in 20 min
          [Pipeline] {
          [Pipeline] node
          Still waiting to schedule task
          Waiting for next available executor on docker
          Running on ubuntu-jenkinsinfra4c19b0 in /home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ
          [Pipeline] {
          [Pipeline] deleteDir
          [Pipeline] stage
          [Pipeline] { (Checkout)
          [Pipeline] checkout
          Cloning the remote Git repository
          Cloning with configured refspecs honoured and without tags
          remote: Counting objects
          remote: Compressing objects
          Receiving objects
          Resolving deltas
          Updating references
          Fetching without tags
          Merging remotes/origin/master commit f5859db6513c6f4ce1d687a5cbd75556ca884c64 into PR head commit aa517f2db161c52bee8c6bbd083ff9a10abe18e4
          Merge succeeded, producing 8c05e8acb4f147d44292a0c103fddc8b25d6ce92
          Checking out Revision 8c05e8acb4f147d44292a0c103fddc8b25d6ce92 (PR-631)
          Commit message: "Merge commit 'f5859db6513c6f4ce1d687a5cbd75556ca884c64' into HEAD"
          First time build. Skipping changelog.
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (shellcheck)
          [Pipeline] sh
          [Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ] Running shell script
          + docker inspect -f . koalaman/shellcheck:v0.4.6
          
          Error: No such object: koalaman/shellcheck:v0.4.6
          [Pipeline] sh
          [Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ] Running shell script
          + docker pull koalaman/shellcheck:v0.4.6
          v0.4.6: Pulling from koalaman/shellcheck
          627beaf3eaaf: Pulling fs layer
          02712c44beac: Pulling fs layer
          366df5cfa23a: Pulling fs layer
          366df5cfa23a: Verifying Checksum
          366df5cfa23a: Download complete
          627beaf3eaaf: Verifying Checksum
          627beaf3eaaf: Download complete
          02712c44beac: Verifying Checksum
          02712c44beac: Download complete
          627beaf3eaaf: Pull complete
          02712c44beac: Pull complete
          366df5cfa23a: Pull complete
          Digest: sha256:191b61e5f436fc51f22faaf2f4e0f77799f75977c7210377dd73a1a0f99ef8bd
          Status: Downloaded newer image for koalaman/shellcheck:v0.4.6
          [Pipeline] withDockerContainer
          ubuntu-jenkinsinfra4c19b0 does not seem to be running inside a container
          $ docker run -t -d -u 1000:1000 -w /home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ -v /home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ:/home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ:rw,z -v /home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ@tmp:/home/jenkins/workspace/Packaging_docker_PR-631-CBEVRL6YSXR25EYNHKWGBQCR2ZTCISJQY2B5SQLHV3W4GLKRZYZQ@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** koalaman/shellcheck:v0.4.6 cat
          $ docker top 45fbd92fef01633fe08c09cb73d65e70baaa66c6ffa5809f8665ff7d17539052
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] }
          [Pipeline] // timeout
          [Pipeline] End of Pipeline
          
          GitHub has been notified of this commit’s build result
          
          java.io.IOException: Failed to run top '45fbd92fef01633fe08c09cb73d65e70baaa66c6ffa5809f8665ff7d17539052'. Error: Error response from daemon: Container 45fbd92fef01633fe08c09cb73d65e70baaa66c6ffa5809f8665ff7d17539052 is not running
          	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
          	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:185)
          	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
          	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
          	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
          	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
          	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
          	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
          	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:19)
          	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:135)
          	at org.jenkinsci.plugins.docker.workflow.Docker.node(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:66)
          	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:123)
          	at WorkflowScript.run(WorkflowScript:21)
          	at ___cps.transform___(Native Method)
          	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
          	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
          	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
          	at sun.reflect.GeneratedMethodAccessor223.invoke(Unknown Source)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
          	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
          	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
          	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
          	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
          	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
          	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
          	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
          	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
          	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
          	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
          	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
          	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
          	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
          	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:331)
          	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:82)
          	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:243)
          	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:231)
          	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
          	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
          	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
          	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
          	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          	at java.lang.Thread.run(Thread.java:748)
          Finished: FAILURE
          

          Note the lack of an error message about ENTRYPOINT. Of course, when I downgrade the plugin on ci.jenkins.io to Docker Pipeline 1.14, the exact same Pipeline with the exact same koalaman/shellcheck:v0.4.6 container succeeds.

          Perplexed!


          R. Tyler Croy added a comment -

          I think this might be a duplicate of JENKINS-49278.

          Will test with the new 1.15.1 release to verify


          Nicolas De Loof added a comment -

          It seems so indeed.

          1.15.1 won't fix this by magic; you might have to pass the extra argument `--entrypoint=""` if the image's entrypoint isn't designed as recommended by Docker for official images.
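In Scripted Pipeline, that override can be supplied through the optional arguments string of `inside()`. A minimal sketch of the suggested workaround, using the image from this issue:

```groovy
node {
  // Workaround sketch: blank out the image's ENTRYPOINT so the
  // plugin's keep-alive `cat` command runs as the container's main
  // process, as `docker top` expects.
  docker.image('koalaman/shellcheck:v0.4.6').inside('--entrypoint=""') {
    sh 'pwd'
  }
}
```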


          R. Tyler Croy added a comment -

          Tested with a 1.15.1 .hpi downloaded from repo.jenkins-ci.org; still the same error with this Pipeline:

          node {
            docker.image('koalaman/shellcheck:v0.4.6').inside {
              sh 'pwd'
            }
          }
          

          (similar to this Jenkinsfile)


          R. Tyler Croy added a comment -

          ndeloof, I'm trying to understand whether this is a bug or a breaking change which should be very broadly communicated.

          As I referenced in my previous comment, this Jenkinsfile has worked for a long, long time, and works with 1.14. As of 1.15 this Pipeline now breaks.

          If the solution is to add `--entrypoint=""`, then could the Docker Pipeline plugin simply do that automatically?


          Nicolas De Loof added a comment -

          Removing support for entrypoints was a breaking change; restoring it is, from some point of view, another one. Making --entrypoint="" the default behaviour would be wrong as well.

          Nothing we can do here; docker.inside is just abusing the container lifecycle. It should be deprecated IMHO, but since it is heavily used by Declarative, we lack an alternative.


          R. Tyler Croy added a comment -

          ndeloof, I understand that defaulting to --entrypoint="" is problematic, but why wouldn't it be reasonable to fall back to --entrypoint="" when that docker top check fails? That would be, IMHO, ideal for end users: it avoids breakages and keeps usage of docker.image.inside consistent.


          Nicolas De Loof added a comment -

          I dislike the idea of magically disabling the entrypoint. If an image doesn't support passing a command, disabling the entrypoint should be an explicit choice by the end user; doing this in the background will result in unexpected side effects and the resurrection of some older issues.

          But that's just my opinion; if others consider this a sane approach, feel free to follow up this way.


          R. Tyler Croy added a comment -

          ndeloof, I think that's a reasonable concern. Would you perhaps be willing to write something for jenkins.io about the entrypoint changes in Docker Pipeline 1.15?

          My highest concern here is that a point release might introduce surprising, and seemingly breaking, behavior for the nearly 100k installations of this plugin.


          Andrew Bayer added a comment -

          fwiw, "abusing container lifecycle" is, frankly, irrelevant here. What matters is that we keep from breaking users any more than absolutely necessary. I strongly agreed with going back to cmd rather than entrypoint, but there is a valid point that we may have waited too long to do so. Not to say I have the right answer, but we do need to think about this.


          marc young added a comment - edited

          I'd also like to point out (aside from breaking everything at 3 of my contracts): you've based everything on "requirements" that are actually suggestions.

          The error I'm now facing in dozens of builds:

          04:11:59 ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
          

          You've stated "as required by official docker images". However, those docs clearly say:

          All official images should provide a consistent interface. 
          

          Should != require. Your use case and assumptions are not a standard to hold an entire community to.

          Please remember: I'm using other peoples official containers to run in jenkins. I cannot (and will not) fork every single one of these to meet your guidelines.


          Nicolas De Loof added a comment -

          @myoung34 you're perfectly right: this is not a requirement but a convention for official images (it's required to get an official image approved), and many custom images don't follow it.

          Our issue here is that we are abusing the container lifecycle: an arbitrary docker image is not designed to run an arbitrary command the way we use them.

          One option is to override the entrypoint to run a custom command. But then we disable the initial entrypoint designed by the image author, which in many cases is required for the image to make any sense. Typically, Selenium images use it to run an X11 server. This option was adopted in the past, introducing the https://issues.jenkins-ci.org/browse/JENKINS-41316 regression.

          The other is to assume newcomers will mostly try Docker Pipeline using official images. So we offer a solution which works out of the box, and can report an issue with the target image if it is detected not to match our requirements. Those who are already used to docker-pipeline then get a documented direction on how to fix their image or update their pipeline.

          I welcome any suggestion for a third option. As one can't "docker exec" into a stopped container, we have to find some way for the container to run a "pause" command, and supporting this is definitely not part of the docker spec.

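          The lifecycle described here can be sketched as the rough sequence of docker commands the plugin issues, written as a plain Pipeline for illustration (a simplification; the flags and details are not the plugin's exact calls):

          ```groovy
          node {
            // Start the container with a long-running "pause" command (cat) so that
            // build steps can later be injected with `docker exec`. If the image's
            // ENTRYPOINT swallows `cat` instead of exec'ing it, the container exits.
            def cid = sh(script: 'docker run -t -d koalaman/shellcheck:v0.4.6 cat',
                         returnStdout: true).trim()
            try {
              // The `docker top` check from the stack trace above: it fails with
              // "Container ... is not running" when the pause command never started.
              sh "docker top ${cid}"
              sh "docker exec ${cid} pwd"   // each step runs via exec, not via run
            } finally {
              sh "docker rm -f ${cid}"
            }
          }
          ```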

          Nicolas De Loof added a comment -

          abayer

          > "abusing container lifecycle" is, frankly, irrelevant here. What matters is that we keep from breaking users any more than absolutely necessary

          Sorry to say, it has been absolutely necessary. We need entrypoint support for many major use cases; disabling it was a breaking change. It has just been restored, with diagnostic code introduced to assist end users in fixing their pipeline/image.

          And we do abuse the container lifecycle by forcing the container to run a command it has not been designed for.


          Mark Russell added a comment -

          This issue is stopping us from upgrading this plugin. Is anyone looking at it? Is there any chance of a solution?


          Nicolas De Loof added a comment -

          The docker-workflow plugin comes with constraints which, unfortunately, have never been clearly documented.

          Those are pretty comparable to https://github.com/knative/docs/blob/master/build/builder-contract.md

          But there's no way we can support arbitrary docker images and entrypoint scripts. Adapt your docker images so they fit into our model.


            Assignee: Nicolas De Loof (ndeloof)
            Reporter: R. Tyler Croy (rtyler)
            Votes: 13
            Watchers: 19

              Created:
              Updated:
              Resolved: