Status: Closed
Resolution: Not A Defect
Jenkins server (2.204.1) with the Docker plugin (1.1.9) and a Docker cloud API.
I work with Jenkins Docker agents (slaves), and I map the agent's build workspace between the container and the host so that I can pass artifacts to downstream jobs.
In Jenkins Configuration - Docker Cloud Details - Container settings:
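A mapping of this kind in the Container settings' Volumes field would look something like the following host:container pair (the paths are illustrative, not the actual values from this setup):

```
/home/jenkins/workspace:/home/jenkins/workspace
```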
This works fine for a single build. The problem starts when I run concurrent builds: they are all mapped to the same workspace on the Docker host and interfere with each other.
What would be the best practice when using Docker agents and mapping the workspace as a volume?
I would rather not use $CustomWorkspace or copy artifacts during the build, as that is hard to manage and purge.
I would expect the Docker plugin to behave like a regular Jenkins agent, which appends @2 for a second concurrent build, but that is not what happens when running concurrent builds on Docker agents.
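For reference, the @-suffix behavior being described is what a regular agent with multiple executors does for concurrent builds of the same job (job name illustrative):

```
/home/jenkins/workspace/my-job      <- first concurrent build
/home/jenkins/workspace/my-job@2    <- second concurrent build on the same agent
```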
Jenkins expects every individual agent to have its own filesystem; it only appends suffixes when more than one executor runs the same job on the same agent at the same time.
Docker already gives each agent its own filesystem; the problem here is that you overrode that so they all shared the same host directory, and that is what broke things.
I'd suggest sharing only part of the filesystem, so that download cache folders etc. are shared but the Jenkins workspace area is not.
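A sketch of that suggestion in the Volumes field, assuming a Maven dependency cache as the shared part (the cache directory and paths are illustrative assumptions, not prescribed values):

```
/var/cache/jenkins/m2:/home/jenkins/.m2
```

The workspace itself gets no host mapping, so each container keeps its own isolated copy, while expensive downloads are still reused across builds.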