Type: Bug
Resolution: Won't Fix
Priority: Major
Environment: Ubuntu 14.04, Jenkins ver. 1.609.1, docker-custom-build-environment plugin ver. 1.2
Steps to reproduce:
- create new freestyle job
- enable Build inside a Docker container
- select Pull docker image from repository
- fill in Image id/tag (any image, e.g. ubuntu)
- click Advanced
- set Docker server URI to a remote docker server, e.g. tcp://192.168.2.14:2375 (a quick reachability check is shown after these steps)
- add an Execute shell build step
- use anything as Command (e.g. env)
- save and run the job
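As a sanity check for the Docker server URI step above, you can confirm the remote daemon is reachable from the Jenkins master; a minimal probe, using the example address (adjust to your setup):
# Should print both client and server versions if the daemon is reachable
$ docker -H tcp://192.168.2.14:2375 version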
Expected outcome:
Job succeeds, the command is run inside a remote container.
Actual outcome:
Job fails, console output contains:
Started by user anonymous
Building on master in workspace /var/jenkins_home/jobs/test-dock-env/workspace
...
$ docker run --tty --detach --user 1000:1000 --workdir /var/jenkins_home/jobs/test-dock-env/workspace --volume /var/jenkins_home/jobs/test-dock-env/workspace:/var/jenkins_home/jobs/test-dock-env/workspace:rw --volume /tmp:/tmp:rw --env ******** ... ubuntu cat
Docker container e93275a1a8256acd427bc21e1b9ab9850695198d231dad4792326e33b5d6d362 started to host the build
[workspace] $ docker exec --tty e93275a1a8256acd427bc21e1b9ab9850695198d231dad4792326e33b5d6d362 /bin/sh -xe /tmp/hudson8942168893563194578.sh
/bin/sh: 0: Can't open /tmp/hudson8942168893563194578.sh
Build step 'Execute shell' marked build as failure
Notes:
Basically this can't work, as the plugin assumes it can share sources/scripts with the container via host bind mounts, which is not possible when using a remote docker server. The same situation arises when Jenkins itself runs in a container.
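To illustrate the bind-mount problem (a minimal sketch, using the example daemon address from the steps above): the --volume paths in the docker run above are resolved on the host running the daemon, not on the Jenkins master, so a script written to the master's /tmp never shows up inside the container, and the build fails with the same error as in the console output:
# On the Jenkins master: write a script into the local /tmp
$ echo 'env' > /tmp/build.sh
# The remote daemon bind-mounts the REMOTE host's /tmp, not the master's,
# so the script does not exist inside the container
$ docker -H tcp://192.168.2.14:2375 run --rm --volume /tmp:/tmp:rw ubuntu /bin/sh -xe /tmp/build.sh
/bin/sh: 0: Can't open /tmp/build.sh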
I don't currently have a great idea on how to solve this.
AFAICT the only approach that would work with a remote docker server is to stream the sources/scripts to the container, much like the docker client streams the build context to the daemon during a build.
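For reference, that is what the docker client already does during a build: it tars the context and streams it over the API, so no shared filesystem is involved. A sketch, assuming the workspace contains a Dockerfile at its root (docker build - reads a tar context from stdin):
# Tar the workspace and stream it to the remote daemon over the API;
# nothing is bind-mounted, the files travel with the request
$ tar -cf - -C /path/to/workspace . | docker -H tcp://192.168.2.14:2375 build -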
is related to: JENKINS-29239 Doesn't work when Jenkins itself is containerized (Open)
Great plugin! Unfortunately, I hit the same issue here.
For the plugin to work equally well with remote containers, we would need to do something like this:
# Stream the workspace contents into the container instead of bind-mounting them
$ tar -cf - -C /path/to/workspace . | docker exec -i ef01 /bin/sh -c "mkdir -p /tmp/workspace && tar -C /tmp/workspace -xf -"
# Stream the build script in and run it via sh (the copy is not executable)
$ cat /tmp/hudson_abcd.sh | docker exec -i ef01 /bin/sh -c "cat > /tmp/build_step.sh && sh /tmp/build_step.sh"
Pros: we no longer need to share volumes with the spawned container.
Cons: we incur the overhead of copying the files.
I don't know what the plan is for docker-compose integration, but I think the previous steps would work the same way, assuming we know the ID of the container we want to use to run the build steps (see the sketch below).
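A sketch of that, assuming a hypothetical compose service named builder (docker-compose ps -q prints the container ID of a service); the same tar trick also works in reverse to retrieve build artifacts:
# Resolve the container ID of the hypothetical "builder" service
$ CID=$(docker-compose ps -q builder)
# Stream the workspace in, exactly as above
$ tar -cf - -C /path/to/workspace . | docker exec -i "$CID" /bin/sh -c "mkdir -p /tmp/workspace && tar -C /tmp/workspace -xf -"
# After the build step has run, stream the results back out
$ docker exec "$CID" tar -cf - -C /tmp/workspace . | tar -xf - -C /path/to/workspace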