
Docker Pipeline plugin: docker.withServer still executes on local Jenkins server

    • Type: Bug
    • Resolution: Incomplete
    • Priority: Major
    • Component: docker-workflow-plugin
    • Labels: None
    • Environment: Jenkins on OS X 10.10 with local boot2docker 1.9.1; Docker 1.9.1 on an Ubuntu VM

      Expect this to be executed on the remote Docker host with TLS configured (remote IP 10.11.11.109, port 2376, Docker credential ID my-docker-creds); instead it is executed against the Jenkins-local Docker daemon:

      docker.withServer('tcp://10.11.11.109:2376', 'my-docker-creds'){ s ->
        docker.image('httpd').inside { c ->
          sh 'uname -a'
        }
      }
      

      This is the console output:

      [Pipeline] node {
      [Pipeline] Sets up Docker server endpoint : Start
      [Pipeline] withDockerServer {
      [Pipeline] sh
      [fds] Running shell script
      + docker inspect -f . httpd
      .
      [Pipeline] Run build steps inside a Docker container : Start
      $ docker run -t -d -u 501:20 -w /Users/kzantow/jenkins_latest_jan_2016/workspace/fds -v /Users/kzantow/jenkins_latest_jan_2016/workspace/fds:/Users/kzantow/jenkins_latest_jan_2016/workspace/fds:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** httpd cat
      [Pipeline] withDockerContainer {
      [Pipeline] sh
      ...
      

          [JENKINS-32709] Docker Pipeline plugin: docker.withServer still executes on local Jenkins server

          Jesse Glick added a comment -

          Possible duplicate of JENKINS-39243.


          Julio Guimaraes added a comment -

          Hello! I think I am having the same issue as described by adam_aph and chenna. I still don't know if there is some configuration missing.

          In my case, I'm running Jenkins inside a container (per https://hub.docker.com/_/jenkins/). This container doesn't have the Docker engine installed, so I am trying to make Jenkins connect to the Docker engine on the host using the "withServer" approach and run an image, but it never works...

          Apparently, Jenkins executes the commands (e.g. running a container from an image) in the same place where it is installed, instead of respecting the "withServer" parameters and executing on the "remote" host.

          Thank you for any instructions you may have.
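
          As far as I understand, withServer only points the Docker client on the node at the remote daemon (via DOCKER_HOST and related environment variables); the docker client binary itself must still be installed wherever the pipeline runs. A minimal sketch to check both pieces (host address and credential ID are illustrative):

          node {
            // Fails with "docker: command not found" if no Docker CLI is on this node
            sh 'docker version'
            docker.withServer('tcp://my.docker.host:2376', 'my-docker-creds') {
              // Inside this block the CLI talks to the remote daemon,
              // so this reports the remote server's details
              sh 'docker info'
            }
          }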

           


          Stéphane Tournié added a comment -

          I'm using Docker Pipeline plugin version 1.10 and I'm having exactly the same issue as adam_aph describes. My Jenkins is installed in a container. I have a remote server that runs a Docker daemon. The daemon is reachable from the Jenkins machine via TCP (tested). I disabled TLS security.

          I wasn't able to make the docker.withServer(...) step work.

          So as a test I simply put the following content in a Jenkinsfile:

          docker.withServer('tcp://my.docker.host:2345') {
            def myImage = docker.build('myImage')
          }
          

          When the pipeline executes I get this error: script.sh: line 2: docker: command not found.

          Am I missing anything? Is it not supposed to work?


          Julio Guimaraes added a comment - - edited

          One year after my first post, I don't know if anyone still needs this, but maybe it helps someone:

          The official installation documentation today recommends using the jenkinsci/blueocean image (not https://hub.docker.com/_/jenkins as it did in the past).

          So, I looked for the Dockerfile (blueocean-plugin/docker/official/Dockerfile) for this image in the jenkinsci/blueocean-plugin GitHub repository. And it contains these lines:

          # Add the docker binary so running Docker commands from the master works nicely
          RUN apk -U add docker
          

          In my tests using this new image I don't even map "-v /var/run/docker.sock:/var/run/docker.sock". The results are:

          • It seems that the Docker daemon is not started, so there are no containers running inside the Jenkins container;
          • I can now create pipelines in Jenkins capable of connecting to remote Docker servers (or even the local machine) using docker.withServer('tcp://my.docker.host:2345').

          Apparently, this new image contains at least the docker client (which was not present in the old image) and it is necessary for using remote docker servers.
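
          In case it helps to verify this, a hedged sketch (host address illustrative) that checks whether the Jenkins image ships a Docker client before relying on withServer:

          node {
            // 'which docker' exits non-zero when the client binary is missing,
            // which was the situation with the old jenkins image
            def hasClient = sh(returnStatus: true, script: 'which docker') == 0
            if (hasClient) {
              docker.withServer('tcp://my.docker.host:2345') {
                sh 'docker info'
              }
            } else {
              error 'No Docker client on this node; add one to the image'
            }
          }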


          Sven Scheil added a comment - - edited

          Hi there,

          we have the same/similar problem. We want to start a container outside of our jenkins-master container.

          We tried a PoC with this small Jenkinsfile pipeline definition:

          node {
              //checkout scm
              docker.withServer('tcp://192.168.122.1:4243') {
                  docker.image('bash:latest').withRun() {
                      sh 'hostname -I'
                      sh 'sleep 60s'
                      sh 'hostname -I'
                  }
              }
          }

          The Jenkins master executes the pipeline successfully. Running the pipeline starts a new container on our specified host 192.168.122.1; we could monitor this with a 'docker container ls -a' command. But the three 'sh' commands are not executed inside the spawned container. They are executed inside the jenkins-master container.

          We have been trying for days now and have no more ideas.

          Any help is appreciated.


          Jesse Glick added a comment -

          svensche Because that is what withRun does. You give it a closure with a Container parameter and do whatever you like with that. Your example is throwing away the parameter, so it starts a container, does some unrelated stuff, then stops the container unused.
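
          To illustrate the point, a sketch of the same PoC that actually uses the Container parameter (host and image taken from the example above):

          docker.withServer('tcp://192.168.122.1:4243') {
            docker.image('bash:latest').withRun() { c ->
              // c is the Container handle; without it, the sh steps run on the
              // node, not inside the spawned container
              sh "docker exec ${c.id} hostname"
            }
          }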


          Sven Scheil added a comment - - edited

          Thank you jglick. We found it later yesterday in the CloudBees docs (https://go.cloudbees.com/docs/plugins/docker-workflow/).

          As far as I can see in the docs (http://jenkins-qs3.hmmh.ag/job/sven_pipeline_scripted/pipeline-syntax/globals), Container has only three properties. Is that all, or am I missing some methods of Container?

          My understanding is to use container.id to make remote exec calls like this:

          sh "docker exec -i ${container.id} sh -c 'cd /var/www/html/wordpress && git pull'"

          Is this the way to go when we want to inject commands in a remote container?

          Thanks again for your support.


          Jesse Glick added a comment -

          Yes, you can use that sort of command if you prefer to avoid Image.inside.
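
          For contrast, a sketch of the Image.inside form, where each sh step is automatically wrapped in a docker exec against the same container:

          docker.withServer('tcp://192.168.122.1:4243') {
            docker.image('bash:latest').inside {
              // Runs inside the container via docker exec, not on the node
              sh 'hostname'
            }
          }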


          Sven Scheil added a comment - - edited

          jglick How do I achieve this: a declarative Jenkinsfile spawns an agent container on a remote Docker host given only the name of the image. The container entrypoint starts the JNLP agent; once the agent is up and running, communication between the Jenkins master and the agent is established. The Jenkins master then tells the agent the next action defined in the Jenkinsfile. The agent executes each build step and returns the console log messages to the master via JNLP.
          This is how we would like to use the docker-plugin and declarative Jenkinsfiles, but the JNLP communication is not established. It is only established when we use a predefined Docker template. But we don't want to use Docker templates: our development teams should specify an image of their choice inside the Jenkinsfile. We want to keep the configuration under Git control.
          We need a way to provide this information

          • User
          • Jenkins URL
          • Connect method,
          • Remote File System Root,
          • etc.

          (normally given by the Docker template configuration) inside the Jenkinsfile. Is there a way to achieve this? Below is a PoC where the container is spawned on our remote Docker host but the checkout happens locally on the Jenkins master. We have defined a DOCKER_HOST environment variable pointing to our remote Docker host.

          pipeline {
          
              agent {
                  docker {
                      image 'cloudbees/java-with-docker-client'
                  }
              }
          
              stages {
                  stage('Example Build') {
                      steps {
                          echo 'Step1'
                          git changelog: false, credentialsId: 'SSH-Test', poll: false, url: 'ssh://git@X/Y/Z.git'
                      }
                  }
          
              }
          } 
          


          Jesse Glick added a comment -

          Best to post to the Jenkins users’ list or similar for usage questions like this.


            Assignee: jglick Jesse Glick
            Reporter: kzantow Keith Zantow