Type: Improvement
Resolution: Fixed
Priority: Minor
With the traditional (scripted) pipeline we have been able to mix working directly in the Jenkins workspace with using docker.inside() to pull in Docker containers that have the additional build tools needed for a build. docker.inside() bind-mounts the Jenkins workspace into the container as a Docker volume, so that whatever happens inside the container can read from, and write to, the Jenkins workspace.
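For instance, a minimal scripted sketch of that mapping (the image name is just a placeholder):

    node('swarm') {
        writeFile file: 'input.txt', text: 'from the workspace'
        docker.image('our-registry/npm-build:latest').inside {
            // the workspace is bind-mounted into the container, so the file
            // written above is visible here, and output written here persists
            sh 'cat input.txt && echo built > output.txt'
        }
        sh 'cat output.txt'   // back on the slave, the container's output is still there
    }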
Part of this is our build environment. The builds all run on slaves that themselves run as Docker containers inside Kubernetes. Those pods run privileged so that they can reach the Docker socket on the host and spin up "peer" Docker containers when docker.inside() is called. This is needed both to run docker commands and to build new Docker images.
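A quick sketch of why that socket access matters: docker commands on the slave talk to the host daemon through the mounted socket, and docker.inside() uses that same daemon to start the peer container.

    node('swarm') {
        sh 'docker version'    // works because /var/run/docker.sock is mounted from the host
        docker.image('our-registry/npm-build:latest').inside {
            sh 'npm --version' // runs in a sibling container started via that same socket
        }
    }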
So, for instance, a slave does a checkout scm, gathers information about the repository commit using git commands, then uses docker.inside() to pull in a pre-configured Docker image with all the necessary npm modules already installed. It runs the gulp tasks inside() the container, which, thanks to the volume mapping, writes the gulp output back to the Jenkins workspace.
Then the inside() block closes, and the next steps of the pipeline do a docker build and a docker push to our registry.
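Concretely, the scripted version of that flow looks roughly like this; the image, app, and credentials names are placeholders:

    node('swarm') {
        checkout scm
        // git is available on the slave itself
        def commit = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()

        // build inside the pre-configured npm image; the gulp output
        // lands back in this same workspace via the volume mapping
        docker.image('our-registry/npm-build:latest').inside {
            sh 'gulp build'
        }

        // back on the slave: build the image from the workspace and push it
        def image = docker.build("our-registry/our-app:${commit}")
        docker.withRegistry('https://our-registry', 'registry-creds') {
            image.push()
        }
    }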
I initially tried doing this in a declarative pipeline by putting an agent statement at the top, just inside pipeline {:

    agent label:'swarm', docker:'our-registry/npm-build:latest'
This initially failed because, while the slave has git on it, git does not exist inside that npm-build image, so I couldn't use the git commands to determine the repository URL and commit hash.
I added git to that image, and that part worked, but then I realized I could go no further: there was no way to run docker commands inside an image that was already being run inside a Docker container on the slave. I had no way of making it privileged so that I could access /var/run/docker.sock on the host.
I then tried giving the pipeline only a node agent:

    pipeline {
        agent label:'swarm'

which worked for the first part of pulling in the code and running the git commands. But when I gave a later stage its own Docker agent,

    stage('gulp build') {
        agent label:'swarm', docker:'our-registry/npm-build:latest'

I was greeted with a completely blank workspace, since the stage-level agent allocates a fresh workspace of its own.
I know I can use stash/unstash: run the git commands in the slave workspace, stash, unstash inside the Docker container, run the gulp commands, stash again, then grab yet another blank workspace on the slave and unstash. But the inefficiency of doing that bugs me, and it would become a nightmare in some of our more complex workflows, which use docker.inside() multiple times and even use multiple different containers, each configured with just the tool needed for that stage.
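For illustration, the stash/unstash version would look something like this (using the same agent syntax as above; stage and stash names are placeholders), with every stage starting from an empty workspace:

    pipeline {
        agent none
        stages {
            stage('checkout') {
                agent label:'swarm'
                steps {
                    checkout scm
                    sh 'git rev-parse HEAD > commit.txt'
                    stash name: 'source', includes: '**/*'
                }
            }
            stage('gulp build') {
                agent label:'swarm', docker:'our-registry/npm-build:latest'
                steps {
                    unstash 'source'               // repopulate the fresh workspace
                    sh 'gulp build'
                    stash name: 'built', includes: '**/*'
                }
            }
            stage('docker build') {
                agent label:'swarm'
                steps {
                    unstash 'built'                // and again on the slave
                    sh 'docker build -t our-registry/our-app . && docker push our-registry/our-app'
                }
            }
        }
    }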
I assume I could do some of this inside a script{} block, but I was hoping to avoid that as much as possible, since I hope to hand these pipelines off to people who are not as familiar with Jenkins and Pipeline in general.
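Something like the following is what I mean; it keeps a single workspace by falling back to scripted steps inside script{}:

    pipeline {
        agent label:'swarm'
        stages {
            stage('gulp build') {
                steps {
                    script {
                        // scripted-style inside(): same workspace, no stashing
                        docker.image('our-registry/npm-build:latest').inside {
                            sh 'gulp build'
                        }
                    }
                }
            }
        }
    }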
I'm just looking for a way to run docker.inside()-like commands, where we pull in another Docker image and do work in there, while still having access to the workspace.