We have a multibranch Pipeline job, where each branch contains a Jenkinsfile and a BuildKit-enhanced Dockerfile. Enabling BuildKit seems to require setting an environment variable when the Jenkins server is started; this is not a user-friendly technique.
Also, Jenkins shouldn't let environment variables "bleed through" like this; it should present a well-defined, consistent environment to build jobs. System-V init scripts had this problem; if you started them from the command line the service would inherit your user environment, causing much puzzlement.
Our Jenkinsfile has this agent definition at the top level:
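Roughly this shape (the filename and build step below are illustrative, not our exact configuration):

```groovy
// Illustrative sketch of a top-level dockerfile agent; details differ in our real file.
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'   // the BuildKit-enhanced Dockerfile in the branch
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'   // placeholder build step
            }
        }
    }
}
```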
There doesn't seem to be a documented way, inside the Jenkinsfile, either to set the environment variable `DOCKER_BUILDKIT=1` or to have the agent run `docker buildx build ...` instead of `docker build ...`; those appear to be the only two ways of enabling BuildKit.
- Adding an environment variable inside the Jenkinsfile sets it inside the docker container, after "docker build" has finished running:
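  For example, something like this (illustrative) only affects the shell environment of steps running inside the already-built container:

  ```groovy
  pipeline {
      agent {
          dockerfile true   // builds and runs the branch's Dockerfile
      }
      environment {
          // Too late: this is set for steps *inside* the container,
          // not for the `docker build` that creates the image.
          DOCKER_BUILDKIT = '1'
      }
      stages {
          stage('Check') {
              steps {
                  sh 'echo $DOCKER_BUILDKIT'   // visible here, but the image was built without BuildKit
              }
          }
      }
  }
  ```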
- There also doesn't seem to be an environment variable section in the multibranch configuration page, nor in the global Jenkins configuration. It looks like the variable has to already be set in the environment when Jenkins is started:
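  On a systemd-based install, for instance, the variable apparently has to go into the service unit itself (sketch; unit name and override contents shown for illustration):

  ```shell
  # Add an environment override for the Jenkins service, then restart it:
  sudo systemctl edit jenkins
  #   [Service]
  #   Environment="DOCKER_BUILDKIT=1"
  sudo systemctl restart jenkins
  ```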
That being said, we did find (in the advanced configuration of an SSH-launched node) that if we set "Prefix Start Agent Command" to `export DOCKER_BUILDKIT=1 ; `, the agent would start with that environment variable set, and BuildKit would then be available for dockerfile agents.
Initially I'd tried just `DOCKER_BUILDKIT=1 ` (without the `export`), but that prefix expanded into something like `DOCKER_BUILDKIT=1 cd ... && <agent launch command>`. Setting that variable for just the `cd` command didn't help much.
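The difference is ordinary shell behavior (shown here in bash, with a throwaway directory standing in for the real agent working directory):

```shell
# A leading VAR=value assignment applies only to the single command after it:
DOCKER_BUILDKIT=1 cd /tmp && env | grep DOCKER_BUILDKIT
# (no output: the variable was set for `cd` alone and never exported)

# With `export` and a `;`, the variable persists for everything that follows:
export DOCKER_BUILDKIT=1 ; cd /tmp && env | grep DOCKER_BUILDKIT
# prints: DOCKER_BUILDKIT=1
```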
- Is there a way to enable BuildKit for a top-level dockerfile agent by way of a configuration option in the same Jenkinsfile?
- Is there a way to rigidly control the environment variables that Jenkins provides to jobs?
- I was really disappointed when I set up a Jenkins agent and none of my jobs would build. So was my boss. :-/
- This lack really detracts from the reproducibility of jobs.