- Type: New Feature
- Resolution: Won't Fix
- Priority: Major
When using Kubernetes and the Jenkins Docker Workflow plugin, it's possible to use Docker-in-Docker (DinD) in a slave or to try to share the local Docker daemon.
Ideally, though, when using Jenkins and Kubernetes together (e.g. with Atomic / OpenShift / OpenStack / Google GKE / vanilla Kubernetes), we'd let Kubernetes take care of provisioning all the Docker containers: pulling images and restarting any failed pods if the machine that's running a Jenkins workflow has issues (or the pod dies).
To do that nicely on Kubernetes we'd need to turn each Docker Workflow script into a Pod, with a Docker container to run the main Groovy workflow process; then for each container in a
docker.image("foo") {}
block we'd add a container to the Pod.
e.g. this workflow
docker.image("maven") {
  // some stuff
}
docker.image("nodejs") {
  // some stuff
}
would be turned into a Pod with these containers:
- workflow
- maven
- nodejs
The workflow container could then talk directly to the other Docker containers using localhost, since the Pod would know all the ports of each container, and it'd be easy to share the build volume between the containers.
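For illustration, here's a minimal sketch of the Pod that might be generated for the example above (the Pod name, image names and volume names are hypothetical, and an emptyDir volume stands in for the shared build workspace):

apiVersion: v1
kind: Pod
metadata:
  name: workflow-build-1        # hypothetical Pod name
spec:
  volumes:
  - name: build-workspace       # shared build volume, visible to every container
    emptyDir: {}
  containers:
  - name: workflow              # runs the main Groovy workflow process
    image: jenkins-workflow     # hypothetical image name
    volumeMounts:
    - name: build-workspace
      mountPath: /workspace
  - name: maven                 # one container per docker.image block
    image: maven
    volumeMounts:
    - name: build-workspace
      mountPath: /workspace
  - name: nodejs
    image: nodejs
    volumeMounts:
    - name: build-workspace
      mountPath: /workspace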
One thing to be careful of is that right now in Kubernetes a Pod definition is static; so rather than imperatively iterating through the Groovy DSL for the workflow, we'd need a kind of 'compile' stage where we evaluate all the `docker.image` blocks; then, once we know them, we can generate a Pod with those Docker images baked into it, which we can then start. Once the Pod starts, all the containers would be provisioned together on the same host (and atomically destroyed at the end of the build).
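As a rough sketch of that 'compile' stage (not the plugin's actual API; the RecordingDocker class is invented for illustration), the script could be evaluated once against a recording stub that collects image names without running the block bodies, and the resulting list used to generate the static Pod:

// Hypothetical two-pass 'compile' stage: the first pass only records which
// images the workflow script uses; the Pod is then generated from that list.
class RecordingDocker {
    List<String> images = []

    // Mirrors the docker.image("foo") { ... } DSL, but just records the
    // image name; the closure body is not executed during this pass.
    def image(String name, Closure body = null) {
        images << name
    }
}

def docker = new RecordingDocker()

// First pass: evaluate the workflow script against the recording stub.
docker.image("maven")  { /* some stuff */ }
docker.image("nodejs") { /* some stuff */ }

// Now every container the Pod needs is known up front:
// the workflow container plus one container per recorded image.
def podContainers = ["workflow"] + docker.images
println "Generating Pod with containers: ${podContainers}"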
A bit more background on this issue:
https://github.com/fabric8io/fabric8/issues/4340