Jenkins / JENKINS-41316

docker.image('my-image').inside{...} no longer honors Dockerfile "entrypoint" since version 1.8

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: docker-workflow-plugin
    • Labels: None
    • Environment: docker-pipeline-plugin:1.9.1, Jenkins 2.19.4.2

      The Docker Pipeline plugin honored the Dockerfile "entrypoint" in "docker.image('my-image').inside{...}" until v1.8. Since then, the "entrypoint" is ignored.

      As a result, running Selenium tests against a selenium-standalone-server started inside the Docker container has been broken since 1.8.

      This regression appears to be caused by JENKINS-37987 and the GitHub commit "[FIXED JENKINS-37987] Override ENTRYPOINT, not just command, for WithContainerStep".

      This issue seems to be similar to JENKINS-39748.
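
      For illustration (distilled from the console logs below, with most flags elided), the change amounts to where "cat" lands on the docker run command line: before 1.8 the image's ENTRYPOINT still ran and received "cat" as its argument, while from 1.8 on the ENTRYPOINT itself is replaced.

      # pre-1.8: ENTRYPOINT preserved, "cat" passed as the command
      docker run -t -d ... my-image cat

      # 1.8 and later: ENTRYPOINT replaced outright, so /opt/bin/entry_point.sh never runs
      docker run -t -d ... --entrypoint cat my-image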

      Testcase

      Code

      node ("docker") {
          docker.image('cloudbees/java-build-tools:2.0.0').inside {
      
              // verify that selenium-standalone-server has been started by the Dockerfile entrypoint /opt/bin/entry_point.sh
              sh "curl http://127.0.0.1:4444/wd/hub"
              
              // test with selenium python
              writeFile (
                  file: 'selenium_remote_web_driver_test.python', 
                  text: 
      """#!/usr/bin/env python 
      
      from selenium import webdriver
      from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
      
      driver = webdriver.Remote(
         command_executor='http://127.0.0.1:4444/wd/hub',
         desired_capabilities=DesiredCapabilities.FIREFOX)
      
      driver.get('http://python.org')
      assert 'Python' in driver.title
      """)
      
              sh "python selenium_remote_web_driver_test.python"
      
          }
      }
      

      Console

      The check "curl http://127.0.0.1:4444/wd/hub" fails with "Connection refused" because the entrypoint that starts the Selenium server never ran:

      Started by user admin
      [Pipeline] node
      Running on agent-1 in /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + docker inspect -f . cloudbees/java-build-tools:2.0.0
      .
      [Pipeline] withDockerContainer
      $ docker run -t -d -u 1000:1000 -w /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2 -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:rw -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat cloudbees/java-build-tools:2.0.0
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + curl http://127.0.0.1:4444/wd/hub
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 127.0.0.1 port 4444: Connection refused
      [Pipeline] }
      $ docker stop --time=1 c65380bfd6c83d2290fc2e8fa8e5ae4cb0b84d1b21c66b4a3019c4a831f8833c
      $ docker rm -f c65380bfd6c83d2290fc2e8fa8e5ae4cb0b84d1b21c66b4a3019c4a831f8833c
      [Pipeline] // withDockerContainer
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      ERROR: script returned exit code 7
      Finished: FAILURE
      

      Workaround

      The workaround is to manually start the Dockerfile entrypoint with sh "nohup /opt/bin/entry_point.sh &" (a more robust polling variant is sketched after the console output below).

      Code

      node ("docker") {
          docker.image('cloudbees/java-build-tools:2.0.0').inside {
              // WORKAROUND: MANUALLY START THE DOCKERFILE ENTRYPOINT
              sh "nohup /opt/bin/entry_point.sh &"
              sh "sleep 5"
              
              // verify that selenium-standalone-server has been started by the Dockerfile entrypoint /opt/bin/entry_point.sh
              sh "curl http://127.0.0.1:4444/wd/hub"
              
              // test with selenium python
              writeFile (
                  file: 'selenium_remote_web_driver_test.python', 
                  text: 
      """#!/usr/bin/env python 
      
      from selenium import webdriver
      from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
      
      driver = webdriver.Remote(
         command_executor='http://127.0.0.1:4444/wd/hub',
         desired_capabilities=DesiredCapabilities.FIREFOX)
      
      driver.get('http://python.org')
      assert 'Python' in driver.title
      """)
      
              sh "python selenium_remote_web_driver_test.python"
      
          }
      }
      

      Console

      Started by user admin
      [Pipeline] node
      Running on agent-1 in /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + docker inspect -f . cloudbees/java-build-tools:2.0.0
      .
      [Pipeline] withDockerContainer
      $ docker run -t -d -u 1000:1000 -w /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2 -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2:rw -v /home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:/home/ubuntu/jenkins-aws-home/workspace/tests/testSelenium2@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat cloudbees/java-build-tools:2.0.0
      [Pipeline] {
      [Pipeline] sh
      [testSelenium2] Running shell script
      + nohup /opt/bin/entry_point.sh
      [Pipeline] sh
      [testSelenium2] Running shell script
      + sleep 5
      [Pipeline] sh
      [testSelenium2] Running shell script
      + curl http://127.0.0.1:4444/wd/hub
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
      [Pipeline] writeFile
      [Pipeline] sh
      [testSelenium2] Running shell script
      + python selenium_remote_web_driver_test.python
      [Pipeline] }
      $ docker stop --time=1 804a1f9cac0e8040b5e882a7c3ebd052df53e9cb99b34c0a7ffba4d0abff5401
      $ docker rm -f 804a1f9cac0e8040b5e882a7c3ebd052df53e9cb99b34c0a7ffba4d0abff5401
      [Pipeline] // withDockerContainer
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      Finished: SUCCESS
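
      The fixed "sleep 5" above is a race: five seconds may not always be enough for the Selenium server to come up. A more robust variant of the workaround (a sketch, still assuming the image's /opt/bin/entry_point.sh and the hub listening on port 4444) polls until the hub answers instead:

      node ("docker") {
          docker.image('cloudbees/java-build-tools:2.0.0').inside {
              // WORKAROUND: start the entrypoint manually, then poll instead of sleeping
              sh '''
                  nohup /opt/bin/entry_point.sh &
                  for i in $(seq 1 30); do
                      # plain curl exits 0 once the connection succeeds
                      curl -s -o /dev/null http://127.0.0.1:4444/wd/hub && break
                      sleep 1
                  done
              '''

              // ... selenium tests as in the workaround above ...
          }
      }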
      

          [JENKINS-41316] docker.image('my-image').inside{...} no longer honors Dockerfile "entrypoint" since version 1.8

          Marc Philipp added a comment (edited)

          hendrikhalkow: The image we're using does not use a custom entrypoint.

          We use it in a build stage like this:

          stage('Build & Test') {
              agent {
                  docker {
                      image 'custom-image'
                      args '--user=root'
                      reuseNode true
                  }
              }
              steps {
                  sh 'cd extensions && make reset clean upgrade test package'
              }
              post {
                  always {
                      junit 'extensions/_build/test/*.xml'
                  }
              }
          }

          This was the output of the stage from the failing build:

          [Pipeline] stage
          [Pipeline] { (Build & Test)
          [Pipeline] getContext
          [Pipeline] sh
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + docker inspect -f . artifactory.acme.com:5000/custom-image:latest
          
          Error: No such object: artifactory.acme.com:5000/custom-image:latest
          [Pipeline] sh
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + docker pull artifactory.acme.com:5000/custom-image:latest
          latest: Pulling from custom-image
          a3ed95caeb02: Pulling fs layer
          1534505fcbc6: Pulling fs layer
          0b4b748d3628: Pulling fs layer
          8e9a47e0752c: Pulling fs layer
          81cf376579c5: Pulling fs layer
          0b4b748d3628: Waiting
          8e9a47e0752c: Waiting
          81cf376579c5: Waiting
          a3ed95caeb02: Verifying Checksum
          a3ed95caeb02: Download complete
          1534505fcbc6: Verifying Checksum
          1534505fcbc6: Download complete
          8e9a47e0752c: Download complete
          81cf376579c5: Verifying Checksum
          81cf376579c5: Download complete
          a3ed95caeb02: Pull complete
          0b4b748d3628: Verifying Checksum
          0b4b748d3628: Download complete
          1534505fcbc6: Pull complete
          0b4b748d3628: Pull complete
          8e9a47e0752c: Pull complete
          81cf376579c5: Pull complete
          Digest: sha256:6b8d0cefe22c82d3f58dd785a8910990a5cb0ce51a9fa609e665967691cad601
          Status: Downloaded newer image for artifactory.acme.com:5000/custom-image:latest
          [Pipeline] withDockerContainer
          j1-rhel6-T2Medium (i-0a8d8923cdbf1d95b) does not seem to be running inside a container
          $ docker run -t -d -u 500:500 --user=root -w /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA -v /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA:/home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA:rw,z -v /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp:/home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** artifactory.acme.com:5000/custom-image:latest cat
          $ docker top 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument. See https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint for entrypoint best practices.
          [Pipeline] {
          [Pipeline] sh
          Post stage
          [ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA] Running shell script
          + cd extensions
          /home/ec2-user/workspace/ice_build_master-GJXUN3EV4ANWC6X4LMPJ5JABSJMJOA7TXCRSWSJIU5TBSSGDUYAA@tmp/durable-14298997/script.sh: line 2: cd: extensions: No such file or directory
          [Pipeline] junit
          Recording test results
          [Pipeline] }
          $ docker stop --time=1 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          $ docker rm -f 3ae2a0fa20e8d46e1389cdf6949304249583bed62858c55e89196cee49888870
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // stage

          So, you are right, it's only a warning. However, the subsequent step fails because it is apparently executed not in the container but on the Jenkins host.
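
          One way to verify that an image declares no ENTRYPOINT is to inspect it directly (a sketch; the image name is taken from the log above, and "null" in the output means no ENTRYPOINT is set):

          node {
              // Prints the image's ENTRYPOINT as JSON; "null" means none is declared
              sh "docker inspect -f '{{json .Config.Entrypoint}}' artifactory.acme.com:5000/custom-image:latest"
          }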


          Santiago Mola added a comment -

          It seems the fix for this introduced a regression from 1.14 to 1.15. Our Docker pipelines are now failing and we had to downgrade to 1.14.


          Alexandre Silveira added a comment -

          Looks like there is a `cat` command hardcoded somewhere.

          This is the log of execution using 1.14:

          09:19:06 $ docker run -t -d -u 1000:1000 -w /vol/jenkins-slave/workspace/teste -v /vol/jenkins-slave/workspace/teste:/vol/jenkins-slave/workspace/teste:rw,z -v /vol/jenkins-slave/workspace/teste@tmp:/vol/jenkins-slave/workspace/teste@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat alpine/git
          

          And this is the log of same job, using 1.15:

          09:17:32 $ docker run -t -d -u 1000:1000 -w /vol/jenkins-slave/workspace/teste -v /vol/jenkins-slave/workspace/teste:/vol/jenkins-slave/workspace/teste:rw,z -v /vol/jenkins-slave/workspace/teste@tmp:/vol/jenkins-slave/workspace/teste@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** alpine/git cat
          

          Look at the end of the log: there is a `cat` command there, and it is not in the entrypoint of the image.


          Chisel Wright added a comment -

          We've had to downgrade to 1.14 for the same reason:

          ... --entrypoint cat in.house/thingy:31

          became

          ... in.house/thingy:31 cat

          and docker steps started failing.


          Leo Luz added a comment (edited)

          We are using 1.15 at our company and we are having the same issue. My Docker image has an entrypoint defined, and image.inside() invokes docker as:

          docker run -t -d -u 1000:1000 -w /var/jenkins/workspace/... myimage:mytag cat

          The problem is that cat is not a recognized argument for my entrypoint, so the container exits and the next command fails:

          $ docker top 52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4

          with the error:

          java.io.IOException: Failed to run top '52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4'. Error: Error response from daemon: Container 52da7791facd2487ad299b815207b539f7c0f54ec0c53c66337b25f81a6c2bb4 is not running
           at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
           at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:185)
           at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
           at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
           at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
           at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
           at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
           at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
           at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:19)

          As a workaround, I'm doing this in my Jenkinsfile:

          script {
              image.inside('--entrypoint ""') {
                  // do something
              }
          }

          This fixes my issue because I don't need the entrypoint definition at this stage, but it would be nice to have the plugin working as expected.

          Thank you!
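
          For reference, the string passed to inside() is appended to the docker run arguments, so the override above can be combined with other flags (a minimal sketch; the image name and environment variable are placeholders):

          script {
              docker.image('myimage:mytag').inside('--entrypoint "" -e SOME_VAR=value') {
                  // runs with the image's ENTRYPOINT disabled for this block
                  sh 'echo "entrypoint disabled"'
              }
          }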


          Hans Kristian Flaatten added a comment -

          The 1.15 release really screwed up all of our builds involving containers. Reverted to 1.14.


          Nik Reiman added a comment -

          Ditto here, 1.15 has caused a bunch of our container-related builds to fail. Looking at the diff of https://github.com/jenkinsci/docker-workflow-plugin/pull/116/files, it seems that the "detection" is looking for `cat` commands, which we are not using within `docker.image.inside`.

          Sadly the `--entrypoint ""` workaround also does not work in our case and we have reverted to 1.14 for now. Is there another JIRA issue already to track this regression, or should this one be re-opened?


          Nicolas De Loof added a comment -

          Please don't re-open this issue; the regression has already been caught and addressed in JENKINS-49278.


          sachin gupta added a comment -

          I'm still facing the same issue with Jenkins version 2.161.

          I have searched many issues, opened and closed, without finding a proper resolution to this problem. Please provide a suitable resolution.


          Ben Faucher added a comment -

          I just ran into this issue. Is this going to be addressed? Why is the entrypoint overridden in the first place?


            Assignee: Unassigned
            Reporter: Cyrille Le Clerc (cleclerc)
            Votes: 8
            Watchers: 32