JENKINS-41316

docker.image('my-image').inside{...} no longer honors Dockerfile "entrypoint" since version 1.8

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: docker-workflow-plugin
    • Environment: docker-pipeline-plugin:1.9.1, Jenkins 2.19.4.2

      The docker pipeline plugin honored the Dockerfile "entrypoint" in "docker.image('my-image').inside{...}" until v1.8. Since then, the "entrypoint" is ignored.

      Running Selenium tests against a selenium-standalone-server started inside the Docker container has therefore been broken since 1.8.

      This regression appears to be caused by JENKINS-37987 and the GitHub commit "[FIXED JENKINS-37987] Override ENTRYPOINT, not just command, for WithContainerStep".

      This issue seems similar to JENKINS-39748.

      Testcase

      Code

      node ("docker") {
          docker.image('cloudbees/java-build-tools:2.0.0').inside {
      
              // verify that selenium-standalone-server has been started by the Dockerfile entrypoint /opt/bin/entry_point.sh
              sh "curl http://127.0.0.1:4444/wd/hub"
              
              // test with selenium python
              writeFile (
                  file: 'selenium_remote_web_driver_test.python', 
                  text: 
      """#!/usr/bin/env python 
      
      from selenium import webdriver
      from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
      
      driver = webdriver.Remote(
         command_executor='http://127.0.0.1:4444/wd/hub',
         desired_capabilities=DesiredCapabilities.FIREFOX)
      
      driver.get('http://python.org')
      assert 'Python' in driver.title
      """)
      
              sh "python selenium_remote_web_driver_test.python"
      
          }
      }
      

      Console

      The check "curl http://127.0.0.1:4444/wd/hub" fails:

      Started by user admin
      [Pipeline] node
      Running on agent-1 in /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + docker inspect -f . cloudbees/java-build-tools:2.0.0
      .
      [Pipeline] withDockerContainer
      $ docker run -t -d -u 1000:1000 -w /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2 -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:rw -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat cloudbees/java-build-tools:2.0.0
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + curl http://127.0.0.1:4444/wd/hub
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 127.0.0.1 port 4444: Connection refused
      [Pipeline] }
      $ docker stop --time=1 c65380bfd6c83d2290fc2e8fa8e5ae4cb0b84d1b21c66b4a3019c4a831f8833c
      $ docker rm -f c65380bfd6c83d2290fc2e8fa8e5ae4cb0b84d1b21c66b4a3019c4a831f8833c
      [Pipeline] // withDockerContainer
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      ERROR: script returned exit code 7
      Finished: FAILURE
      

      Workaround

      The workaround is to manually start the Dockerfile entrypoint with sh "nohup /opt/bin/entry_point.sh &".

      Code

      node ("docker") {
          docker.image('cloudbees/java-build-tools:2.0.0').inside {
              // WORKAROUND: MANUALLY START THE DOCKERFILE ENTRYPOINT
              sh "nohup /opt/bin/entry_point.sh &"
              sh "sleep 5"
              
              // verify that selenium-standalone-server has been started by the Dockerfile entrypoint /opt/bin/entry_point.sh
              sh "curl http://127.0.0.1:4444/wd/hub"
              
              // test with selenium python
              writeFile (
                  file: 'selenium_remote_web_driver_test.python', 
                  text: 
      """#!/usr/bin/env python 
      
      from selenium import webdriver
      from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
      
      driver = webdriver.Remote(
         command_executor='http://127.0.0.1:4444/wd/hub',
         desired_capabilities=DesiredCapabilities.FIREFOX)
      
      driver.get('http://python.org')
      assert 'Python' in driver.title
      """)
      
              sh "python selenium_remote_web_driver_test.python"
      
          }
      }
      

      Console

      Started by user admin
      [Pipeline] node
      Running on agent-1 in /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + docker inspect -f . cloudbees/java-build-tools:2.0.0
      .
      [Pipeline] withDockerContainer
      $ docker run -t -d -u 1000:1000 -w /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2 -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:rw -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat cloudbees/java-build-tools:2.0.0
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + nohup /opt/bin/entry_point.sh
      [Pipeline] sh
      [testSelenium2] Running shell script
      + sleep 5
      [Pipeline] sh
      [testSelenium2] Running shell script
      + curl http://127.0.0.1:4444/wd/hub
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
      [Pipeline] writeFile
      [Pipeline] sh
      [testSelenium2] Running shell script
      + python selenium_remote_web_driver_test.python
      [Pipeline] }
      $ docker stop --time=1 804a1f9cac0e8040b5e882a7c3ebd052df53e9cb99b34c0a7ffba4d0abff5401
      $ docker rm -f 804a1f9cac0e8040b5e882a7c3ebd052df53e9cb99b34c0a7ffba4d0abff5401
      [Pipeline] // withDockerContainer
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      Finished: SUCCESS
      

          Comments

          Ryan Campbell added a comment -

          It's not clear to me why your use case requires docker.inside{}. When I use the selenium image, the withRun approach works great.

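For context, the withRun approach mentioned above starts the container with a plain `docker run -d`, so the image's entrypoint runs untouched and the steps in the closure execute on the agent. A minimal sketch, not taken from this issue; the `selenium/standalone-firefox` image name and the port mapping are assumptions:

```groovy
// Sketch of the withRun approach: the container keeps its own entrypoint,
// and the sh steps run on the agent, talking to the published port.
node('docker') {
    docker.image('selenium/standalone-firefox').withRun('-p 4444:4444') { c ->
        sh 'sleep 5' // give the entrypoint time to start the Selenium hub
        sh 'curl http://127.0.0.1:4444/wd/hub'
        sh 'python selenium_remote_web_driver_test.python'
    }
}
```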

          Cyrille Le Clerc added a comment -

          recampbell I took the flow of "Jenkins: The Definitive Guide" and tried to make it work with as few differences as possible between:

          • A 'classic' linux build agent with Firefox that works with Java Selenium test frameworks
          • A vanilla linux build agent using docker-pipeline to customize the build environment and get the desired JDK, Maven & Firefox with Firefox working with Java Selenium test frameworks
          • A Docker based cloud agent (Kubernetes Agents, Amazon ECS Agents... ) so that the entire build runs in a Docker container that brings all the customization (JDK, Maven & Firefox)

          Here is what I managed to implement up to docker-pipeline 1.8. The only trick was to switch the Selenium driver from Firefox to Remote+Firefox in order to use Xvfb.

          The pipeline below is almost identical on classic Linux agents, docker-pipeline, and Docker-based cloud agents. Since docker-pipeline:1.8, I need to add something like "nohup /opt/bin/entry_point.sh &".

          node ('docker'){
          
              docker.image('cloudbees/java-build-tools:2.0.0').inside {
                  git 'https://github.com/cyrille-leclerc/game-of-life.git'
                  stage 'Build Web App'
                  withMaven(mavenSettingsConfig: 'maven-settings-for-gameoflife') {
          
                      sh "mvn clean package"
                      step([$class: 'ArtifactArchiver', artifacts: 'gameoflife-web/target/*.war'])
                  }
              }
          
              docker.image('cloudbees/java-build-tools:2.0.0').inside {
                withMaven(
                        mavenSettingsConfig: 'maven-settings-for-gameoflife',
                        mavenLocalRepo: '.repository') {
                     
                      sh """
                         cd gameoflife-acceptance-tests
                         mvn verify -Dwebdriver.driver=remote -Dwebdriver.remote.driver=firefox -Dwebdriver.remote.url=http://localhost:4444/wd/hub -Dwebdriver.base.url=http://...
                      """
                  }
              }
          }
          


          Andrew Bayer added a comment -

          I've put a PR up at https://github.com/jenkinsci/docker-workflow-plugin/pull/85 - it reverts .inside behavior to pre-JENKINS-37987 behavior, while adding a new .overrideEntrypoint method that works identically to how .inside worked post-JENKINS-37987.


          Cyrille Le Clerc added a comment -

          abayer thanks!


          Mike Kobit added a comment -

          Any progress on this? Seems like this is related to https://issues.jenkins-ci.org/browse/JENKINS-38438 and https://issues.jenkins-ci.org/browse/JENKINS-39748 as well.


          Cyrille Le Clerc added a comment -

          abayer shall we re-open this issue and mark it as in progress, since docker-workflow#116 seems to solve this regression?


          Nicolas De Loof added a comment - fixed by https://github.com/jenkinsci/docker-workflow-plugin/pull/116

          ludovic SMADJA added a comment -

          In my environment, your latest pull request doesn't work. The cat process is never found.

          The reason is that the docker top command returns more fields than the 4 the code expects. In my environment, I get more fields:

          UID    PID   PPID  C   STIME   TTY    TIME      CMD
          smadja 6203  6187  1   14:42   pts/0  00:00:00  cat
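The parsing problem can be reproduced without Docker: with 8 columns, the command name is no longer where a 4-field parser looks for it. A minimal sketch; the fixed column index and the `docker top` invocation in the final comment are illustrative assumptions in the spirit of PR #132, not the plugin's actual code:

```shell
# Sample `docker top` output line with 8 columns, as in the environment above:
line="smadja 6203 6187 1 14:42 pts/0 00:00:00 cat"

# Reading an early fixed column misses the command name...
echo "$line" | awk '{print $4}'   # prints "1"

# ...while the command is actually the last field:
echo "$line" | awk '{print $NF}'  # prints "cat"

# `docker top` forwards its extra arguments to ps, so the returned columns
# can be pinned, e.g. (hypothetical invocation):
#   docker top <container-id> -eo pid,comm
```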

           

          I've opened a pull request with a small change that forces the fields returned by the top command:

          https://github.com/jenkinsci/docker-workflow-plugin/pull/132

          I've tested the code with the master branch of docker-workflow-plugin and this simple pipeline:

           

          node("master"){
          
              docker.image("alpine:latest").pull()
              docker.image("alpine:latest").inside() {
                  sh "ps aux"
              }
          }
          

           

          My Docker environment is:

          
          

          [14:09] smadja@orion:jenkins $ docker info
          Containers: 58
          Running: 6
          Paused: 0
          Stopped: 52
          Images: 271
          Server Version: 18.01.0-ce
          Storage Driver: overlay2
          Backing Filesystem: extfs
          Supports d_type: true
          Native Overlay Diff: true
          Logging Driver: json-file
          Cgroup Driver: cgroupfs
          Plugins:
          Volume: local
          Network: bridge host macvlan null overlay
          Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
          Swarm: inactive
          Runtimes: runc
          Default Runtime: runc
          Init Binary: docker-init
          containerd version: 89623f28b87a6004d4b785663257362d1658a729
          runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
          init version: 949e6fa
          Security Options:
          seccomp
          Profile: default
          Kernel Version: 4.14.14-300.fc27.x86_64
          Operating System: Fedora 27 (Workstation Edition)
          OSType: linux
          Architecture: x86_64
          CPUs: 8
          Total Memory: 31.35GiB
          Name: orion.jalios.local
          ID: YWSZ:IYWP:LXRK:73OO:EXUG:LCWY:WBLJ:N5XP:QUUO:OLHQ:EJIT:Z5DN
          Docker Root Dir: /home/docker-data
          Debug Mode (client): false
          Debug Mode (server): false
          Registry: https://index.docker.io/v1/
          Labels:
          Experimental: false
          Insecure Registries:
          127.0.0.0/8
          Live Restore Enabled: false
           


          Nicolas De Loof added a comment - - edited

          Please open a separate issue to report the regression (it can reference this one).

          => https://issues.jenkins-ci.org/browse/JENKINS-49278


          Sean Glover added a comment -

          I was able to downgrade the Docker Pipeline plugin manually to 1.14 from 1.15 and my builds seem to work alright.


          Hendrik Halkow added a comment -

          For those who are experiencing issues (e.g. jayv, seglo, marbon, myoung34, marcphilipp): can you confirm that all your build images use custom entrypoints?


          Sean Glover added a comment -

          My image defines an entrypoint.


          marc young added a comment -

          Mine does as well (hashicorp/terraform:latest)


          Jo Voordeckers added a comment -

          I run a handful of containers; only one has an entrypoint, and with 1.15 the pipeline failed in that container's stage. I have since downgraded to 1.14, so I can't verify at this point.


          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Nicolas De Loof
          Path:
          src/main/java/org/jenkinsci/plugins/docker/workflow/WithContainerStep.java
          http://jenkins-ci.org/commit/docker-workflow-plugin/ff01f98cabff1db52f5e9c257e48e79d1d3cbf57
          Log:
          better diagnostic (JENKINS-41316)


          Marc Philipp added a comment - - edited

          hendrikhalkow The image we're using does not use a custom entrypoint.

          We use it in a build stage like this:

          stage('Build & Test') {
              agent {
                  docker {
                      image 'custom-image'
                      args '--user=root'
                      reuseNode true
                  }
              }
              steps {
                  sh 'cd extensions && make reset clean upgrade test package'
              }
              post {
                  always {
                      junit 'extensions/_build/test/*.xml'
                  }
              }
          }

          This was the output of the stage from the failing build:

          [Pipeline] stage
          [Pipeline] { (Build & Test)
          [Pipeline] getContext
          [Pipeline] sh
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + docker inspect -f . artifactory.acme.com:5000/custom-image:latest
          
          Error: No such object: artifactory.acme.com:5000/custom-image:latest
          [Pipeline] sh
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + docker pull artifactory.acme.com:5000/custom-image:latest
          latest: Pulling from custom-image
          a3ed95caeb02: Pulling fs layer
          1534505fcbc6: Pulling fs layer
          0b4b748d3628: Pulling fs layer
          8e9a47e0752c: Pulling fs layer
          81cf376579c5: Pulling fs layer
          0b4b748d3628: Waiting
          8e9a47e0752c: Waiting
          81cf376579c5: Waiting
          a3ed95caeb02: Verifying Checksum
          a3ed95caeb02: Download complete
          1534505fcbc6: Verifying Checksum
          1534505fcbc6: Download complete
          8e9a47e0752c: Download complete
          81cf376579c5: Verifying Checksum
          81cf376579c5: Download complete
          a3ed95caeb02: Pull complete
          0b4b748d3628: Verifying Checksum
          0b4b748d3628: Download complete
          1534505fcbc6: Pull complete
          0b4b748d3628: Pull complete
          8e9a47e0752c: Pull complete
          81cf376579c5: Pull complete
          Digest: sha256:6b8d0cefe22c82d3f58dd785a8910990a5cb0ce51a9fa609e665967691cad601
          Status: Downloaded newer image for artifactory.acme.com:5000/custom-image:latest
          [Pipeline] withDockerContainer
          j1-rhel6-T2Medium (i-0a8d8923cdbf1d95b) does not seem to be running inside a container
          $ docker run -t -d -u 500:500 --user=root -w /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA -v /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA:/home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA:rw,z -v /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp:/home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** artifactory.acme.com:5000/custom-image:latest cat
          $ docker top 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument. See https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint for entrypoint best practices.
          [Pipeline] {
          [Pipeline] sh
          Post stage
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + cd extensions
          /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp/durable-14298997/script.sh: line 2: cd: extensions: No such file or directory
          [Pipeline] junit
          Recording test results
          [Pipeline] }
          $ docker stop --time=1 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          $ docker rm -f 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // stage

          So, you are right, it's only a warning. However, the subsequent step fails because it looks like it's not executed in the container but on the Jenkins host.


          Santiago Mola added a comment -

          It seems the fix for this introduced a regression from 1.14 to 1.15. Our Docker pipelines are now failing and we had to downgrade to 1.14.


          Alexandre Silveira added a comment -

          Looks like there is a `cat` command hardcoded somewhere.

          This is the log of execution using 1.14:

          09:19:06 $ docker run -t -d -u 1000:1000 -w /vol/jenkins-slave/workspace/teste -v /vol/jenkins-slave/workspace/teste:/vol/jenkins-slave/workspace/teste:rw,z -v /vol/jenkins-slave/workspace/teste@tmp:/vol/jenkins-slave/workspace/teste@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat alpine/git
          

          And this is the log of same job, using 1.15:

          09:17:32 $ docker run -t -d -u 1000:1000 -w /vol/jenkins-slave/workspace/teste -v /vol/jenkins-slave/workspace/teste:/vol/jenkins-slave/workspace/teste:rw,z -v /vol/jenkins-slave/workspace/teste@tmp:/vol/jenkins-slave/workspace/teste@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** alpine/git cat
          

          Look at the end of the log: there is a `cat` command there, and it is not part of the image's entrypoint.


          Chisel Wright added a comment -

          We've had to downgrade to 1.14 for the same reason:

          ... --entrypoint cat in.house/thingy:31

          became

          ... in.house/thingy:31 cat

          and docker steps started failing


Leo Luz added a comment - edited

          We are using 1.15 in our company and we are having the same issue. My docker image has an entrypoint defined and image.inside() invokes docker as:

          docker run -t -d -u 1000:1000 -w /var/jenkins/workspace/....myimage:mytag cat

The problem is that `cat` is not a recognized argument for my entrypoint, so the container exits and the next command fails:

          $ docker top 52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4

          with the error:

          java.io.IOException: Failed to run top '52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4'. Error: Error response from daemon: Container 52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4 is not running
           at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
           at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:185)
           at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
           at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
           at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
           at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
           at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
           at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
           at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:19)

As a workaround, I'm doing this in my Jenkinsfile:

script {
    image.inside('--entrypoint ""') {
        // do something
    }
}

This fixes my issue because I don't need the entrypoint definition at this stage, but it would be nice to have the plugin working as expected.
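Editor's note: the two workarounds mentioned in this thread can be summarized in one hedged Jenkinsfile sketch. The image name `my-image` and the echoed message are placeholders, not from this issue:

```groovy
node('docker') {
    // Workaround from this thread: blank out the entrypoint so that the
    // `cat` command the plugin appends runs directly as the container
    // command. This only helps if the build does not actually need the
    // image's entrypoint to run.
    docker.image('my-image').inside('--entrypoint ""') {
        sh 'echo "running without the image entrypoint"'
    }
    // Alternatively, several commenters pinned docker-workflow-plugin back
    // to 1.14, which overrode the entrypoint with `--entrypoint cat` rather
    // than appending `cat` as an argument the entrypoint must accept.
}
```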

          Thank you!


Hans Kristian Flaatten added a comment -

The 1.15 release really screwed up all of our builds involving containers. Reverted to 1.14.

          Nik Reiman added a comment -

          Ditto here, 1.15 has caused a bunch of our container-related builds to fail. Looking at the diff of https://github.com/jenkinsci/docker-workflow-plugin/pull/116/files, it seems that the "detection" is looking for `cat` commands, which we are not using within `docker.image.inside`.

Sadly the `--entrypoint ""` workaround also does not work in our case, and we have reverted to 1.14 for now. Is there another JIRA issue already tracking this regression, or should this one be re-opened?


Nicolas De Loof added a comment -

Please don't re-open this issue; the regression has already been caught and is addressed in JENKINS-49278.

          sachin gupta added a comment -

I'm still facing the same issue with Jenkins version 2.161.

I have searched through many issues, opened and closed, without finding a proper resolution to this problem. Please provide a suitable resolution.


          Ben Faucher added a comment -

          I just ran into this issue. Is this going to be addressed? Why is the entrypoint overridden in the first place?


Assignee: Unassigned
Reporter: Cyrille Le Clerc (cleclerc)
Votes: 8
Watchers: 32