Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment: Jenkins 2.19.1
The job is run on a physical Linux slave/node machine with:
- Username: jenkins-slave-1.
- User home: /home/jenkins-slave-1.
- Remote slave root: /home/jenkins-slave-1/slave-root.
- Docker available for the jenkins-slave-1 user.
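For reference, these facts can be confirmed on the node itself with something along these lines (a quick manual sketch; exact output will differ per machine):
# run as the jenkins-slave-1 user on the node
id -u; id -g                 # expected: 1003 / 1003
echo "$HOME"                 # expected: /home/jenkins-slave-1
docker info >/dev/null && echo "Docker is usable by this user"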
I am trying to have Jenkins jobs that are built inside Docker containers via the Custom Build Environment plugin preserve the Gradle cache (downloaded artifacts and the wrapper). For this, I am adding a custom volume mapping $HOME/.gradle -> $WORKSPACE/.gradle, similar to what the plugin help suggests. This is the job log (environment variables skipped for brevity):
$ docker run --rm --entrypoint /bin/true alpine:3.2
$ docker run --tty --rm --entrypoint /sbin/ip alpine:3.2 route
$ docker run --tty --detach --workdir /home/jenkins-slave-1/slave-root/workspace/docker-test --volume /home/jenkins-slave-1/slave-root:/home/jenkins-slave-1/slave-root:rw --volume /home/jenkins-slave-1/.gradle:/home/jenkins-slave-1/slave-root/workspace/docker-test/.gradle:rw --volume $HOME/.gradle:$WORKSPACE/.gradle:rw --volume /tmp:/tmp:rw openjdk:8u102 /bin/cat
Docker container e4baaf3f35f962415e4e769f408e9721b7d4f0e732c47d59631e8bbd0351ac43 started to host the build
$ docker exec --tty e4baaf3f35f962415e4e769f408e9721b7d4f0e732c47d59631e8bbd0351ac43 env
[docker-test] $ docker exec --tty --user 1003:1003 e4baaf3f35f962415e4e769f408e9721b7d4f0e732c47d59631e8bbd0351ac43 /bin/bash -xe /tmp/hudson8646767258349618960.sh
[Gradle] - Launching build.
[docker-test] $ docker exec --tty --user 1003:1003 e4baaf3f35f962415e4e769f408e9721b7d4f0e732c47d59631e8bbd0351ac43 /home/jenkins-slave-1/slave-root/workspace/docker-test/gradlew --gradle-user-home /home/jenkins-slave-1/slave-root/workspace/docker-test/.gradle --no-daemon -PfindbugsXml clean build
There seem to be a few issues:
- The additional volume is specified twice: once with the unexpanded environment variables from the configuration and once with them expanded (see the third 'docker run' command above).
- The plugin apparently does not clean this additional volume up. This can be checked by running 'docker volume ls -f dangling=true' before and after the job runs: each new run leaves behind a new dangling volume (see the reproduction sketch after this list). I'm not sure whether this is related to the first issue, the double --volume specification.
- If /home/jenkins-slave-1/.gradle (the host directory being mounted) does not exist, it is created, but owned by root. The reason is that the initial 'docker run' that creates the container does not run as the jenkins-slave-1 user but as whatever user the image defines (I'm using openjdk:8u102); only the subsequent 'docker exec' calls specify '--user 1003:1003' (1003 being the user ID / group ID of the jenkins-slave-1 user on the node). This breaks the job: the build step launched via the later 'docker exec --user 1003:1003' cannot write to that directory, so the Gradle call fails.
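For reference, the dangling-volume behaviour can be reproduced on the node along these lines (a manual sketch run on the slave, not part of the job configuration):
# count dangling volumes before a run
docker volume ls -qf dangling=true | wc -l
# ... trigger one run of the docker-test job in Jenkins ...
# count again afterwards: it has gone up by one
docker volume ls -qf dangling=true | wc -l
# manual cleanup until the plugin removes these itself (only if any are listed)
docker volume rm $(docker volume ls -qf dangling=true)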
Expected behavior:
- The custom volume directory is writable by the user the build is running as (a possible manual workaround is sketched below).
- After the build, custom volumes are cleaned up.
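In the meantime, a possible manual workaround for the ownership issue is to pre-create the host directory with the correct owner before the first build that mounts it, roughly as follows (a sketch using the paths and IDs from above):
# run as the jenkins-slave-1 user on the node, before the first build
mkdir -p "$HOME/.gradle"     # /home/jenkins-slave-1/.gradle, owned by 1003:1003
ls -ldn "$HOME/.gradle"      # verify owner and group are 1003 1003
# since the directory already exists with the correct owner, the 'docker run'
# that starts the build container no longer creates it as root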