
Doesn't work when Jenkins itself is containerized

      When the Jenkins master runs inside a container and uses a local executor, docker-custom-build-environment can't be used, because the build container can't bind-mount the workspace and temp dir.

      A possible solution is to run the build container with `--volumes-from` so it can access the master's JENKINS_HOME.
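For illustration, a minimal sketch of what the suggested fix could look like on the command line. The container name `jenkins-master` and the image `build-image` are hypothetical stand-ins, and the function only prints the invocation instead of running it, so no docker daemon is needed:

```shell
# Sketch: with --volumes-from, the build container inherits every volume of
# the master container, so the workspace under JENKINS_HOME is visible at the
# same path in both. "jenkins-master" and "build-image" are made-up names.
MASTER=jenkins-master
JENKINS_HOME=/var/jenkins_home

build_cmd() {
  # Print the docker invocation instead of executing it.
  printf '%s ' docker run --rm \
    --volumes-from "$MASTER" \
    --workdir "$JENKINS_HOME/workspace/myjob" \
    build-image
  echo
}

build_cmd
```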


          Julien Garcia Gonzalez added a comment -

          Hello ndeloof

          You mention --volume-from; do you mean docker run --volumes-from $BUILD_CONTAINER_ID?

          Peter Jones added a comment -

          juliengarcia This means that if you launch the Jenkins docker container with a docker --volume parameter that shares the host system's /var/jenkins_home and /tmp directories, then this plug-in can work even when Jenkins is itself running in a container.

          Here is a snippet of the docker run command line I use:

          docker run -v /var/jenkins_home:/var/jenkins_home -v /tmp:/tmp -p 8080:8080 -p 50000:50000 -d jenkins


          Mariska Tallandtree added a comment -

          As I may have more than one Jenkins master running on one docker host, I cannot add -v /var/jenkins_home:/var/jenkins_home at run time. So I've solved this by creating a symlink inside the Jenkins container, named after my host mount point and pointing to /var/jenkins_home, and telling the Jenkins master that $JENKINS_HOME is equal to this symlink.
          So (no ports exposed, as I access Jenkins via an apache container in the same network):

          docker run -v /var/run/docker.sock:/var/run/docker.sock -v /bin/docker:/bin/docker \
             -v /usr/lib64/libdevmapper.so.1.02:/usr/lib/libdevmapper.so.1.01 \
             -v /data/jenkins_01/home:/var/jenkins_home -v /tmp:/tmp --env JENKINS_HOME=/data/jenkins_01/home \
             --name jenkins_01 -d myjenkins
          

          Last line of my Dockerfile:

          ENTRYPOINT /usr/local/bin/prepare_jenkins.sh \
            && HOME=$JENKINS_HOME gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
          

          prepare_jenkins.sh:

          #!/bin/bash
          # Point a custom $JENKINS_HOME at the default mount point via a symlink.
          if [[ "${JENKINS_HOME}" != "/var/jenkins_home" ]]; then
             mkdir -p "$(dirname "${JENKINS_HOME}")"
             ln -sf -T /var/jenkins_home "${JENKINS_HOME}"
          fi
          # Create a docker group matching the socket's GID and add jenkins to it.
          grep -q docker /etc/group || groupadd -g "$(stat -c '%g' /var/run/docker.sock)" docker
          usermod -a -G docker jenkins
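The symlink trick above can also be exercised outside a container. Here is a sketch using temporary directories as stand-ins for /var/jenkins_home and the custom home (the paths are assumptions, not the real Jenkins layout):

```shell
# Stand-ins for the real paths; any writable temp location works.
DEFAULT_HOME=$(mktemp -d)          # plays the role of /var/jenkins_home
JENKINS_HOME=$(mktemp -d)/home     # plays the role of /data/jenkins_01/home

# Same logic as prepare_jenkins.sh: link the custom home to the default one.
if [[ "${JENKINS_HOME}" != "${DEFAULT_HOME}" ]]; then
   mkdir -p "$(dirname "${JENKINS_HOME}")"
   ln -sf -T "${DEFAULT_HOME}" "${JENKINS_HOME}"
fi

# A file written under the default home is visible through the custom path.
touch "${DEFAULT_HOME}/config.xml"
```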
          


          Chris Fraser added a comment -

          I just submitted a pull-request that addresses this issue. Comments appreciated!

          https://github.com/jenkinsci/docker-custom-build-environment-plugin/pull/38


          Chris Fraser added a comment - - edited

          I should also mention that accessing the Docker socket from within the Jenkins container is not possible by default if the Jenkins process runs as a non-root user (which is the case when running the official JenkinsCI container). This Stack Overflow answer covers how to open up the socket permissions in this scenario:

          http://stackoverflow.com/a/33183227
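One common way around this (a sketch of the general idea, not the plugin's own mechanism) is to derive the socket's owning GID and create a matching group for the jenkins user, as the prepare_jenkins.sh fragment above does. The fallback path below is only a stand-in for machines where the real socket is absent:

```shell
# Derive the GID that owns the docker socket and print the groupadd command
# that would create a matching group inside the container. /dev/null is used
# as a stand-in when no docker socket exists on this machine.
SOCK=/var/run/docker.sock
[ -e "$SOCK" ] || SOCK=/dev/null
DOCKER_GID=$(stat -c '%g' "$SOCK")
echo "groupadd -g ${DOCKER_GID} -o docker"
```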


          Mariska Tallandtree added a comment -

          I run the jenkins container as user jenkins, but before running jenkins, in a 'prepare_jenkins.sh' step, I add the jenkins user to the docker group, which is created if it does not exist already. See the code fragment above. It works; no need to chmod a+s /usr/bin/docker or to use sudo.


          Chris Fraser added a comment -

          tallandtree, that worked great, thanks for that! I'm using the official JenkinsCI container, which already has a group (users) with the same id (100) as the group which owns `docker.sock` on my docker host, so I tweaked your groupadd line to add the `-o` option, which permits the addition of a group with a non-unique GID.

          grep docker /etc/group || groupadd -g $(stat -c "%g" /var/run/docker.sock) -o docker
          


          Tim Gifford added a comment - - edited

          I'm running docker in a container, and I get this error on every Jenkins (docker) slave I try to run on:

          Error response from daemon: Cannot start container 3fc0dc09: [8] System error: no such file or directory
          22:04:54 FATAL: Failed to run docker image
          22:04:54 java.lang.RuntimeException: Failed to run docker image
          22:04:54 	at com.cloudbees.jenkins.plugins.docker_build_env.Docker.runDetached(Docker.java:226)
          22:04:54 	at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.startBuildContainer(DockerBuildWrapper.java:202)
          22:04:54 	at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:175)
          22:04:54 	at hudson.model.Build$BuildExecution.doRun(Build.java:156)
          22:04:54 	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
          22:04:54 	at hudson.model.Run.execute(Run.java:1738)
          22:04:54 	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
          22:04:54 	at hudson.model.ResourceController.execute(ResourceController.java:98)
          22:04:54 	at hudson.model.Executor.run(Executor.java:410)
          22:04:54 Finished: FAILURE
          

          Including this here to help others find it when searching the internet.


          Nickolas Fox added a comment - - edited

          Same problem for me, but in my case I run jobs on several slaves. Those slaves use the standard JNLP connection (with the slave.jar app) to the Jenkins master, and they are placed in docker containers (for easy management).

          Here is the docker run I see in the Jenkins console:

          docker run --tty --detach --workdir /var/jenkins_home/workspace/project-api-release --volume /var/jenkins_home:/var/jenkins_home:rw --volume jenkins-slave-workdir:/var/jenkins_home/:rw ...
          

          This could work for me, because jenkins-slave-workdir is a named volume for the jenkins-slave container that keeps the Jenkins home dir in the standard docker volumes location. But, as you can see, the plugin itself also implicitly tries to mount the standard /var/jenkins_home ($JENKINS_HOME), which is the cause of the conflict.

          docker: Error response from daemon: Duplicate mount point '/var/jenkins_home'.
          See 'docker run --help'.
          FATAL: Failed to run docker image
          java.lang.RuntimeException: Failed to run docker image
          	at com.cloudbees.jenkins.plugins.docker_build_env.Docker.runDetached(Docker.java:226)
          	at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.startBuildContainer(DockerBuildWrapper.java:202)
          	at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:175)
          	at hudson.model.Build$BuildExecution.doRun(Build.java:156)
          	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
          	at hudson.model.Run.execute(Run.java:1720)
          	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
          	at hudson.model.ResourceController.execute(ResourceController.java:98)
          	at hudson.model.Executor.run(Executor.java:404)
          

          It would be nice to have simple control over this option, because if I use /var/jenkins_home as the volume for the jenkins slave container, everything works fine.

          Moreover, the pipeline docker plugin handles this pretty well.

          I've fixed it with a simple symbolic link from the named volume to /var/jenkins_home explicitly, but this is a really bad solution.


            Assignee: Jon Hermansen
            Reporter: Nicolas De Loof