Type: Improvement
Resolution: Fixed
Priority: Minor
Labels: None
Right now the plugin launches a single Docker container and mounts the job folder from the host machine. The container is started with the same user and group as the host, so that it can write those files back to the host.
This is a problem when you need to run Docker as root, for example when running Docker-in-Docker.
To address this, the plugin could create a separate, storage-only container which mounts the host's folder with the right credentials. The job's Docker container would then start as whatever user we need and attach the data container's volumes with `--volumes-from`.
This way we get the best of both worlds - the ability to run the job container as any user, and the ability to write to the host's filesystem.
It might be worth exploring which option works better in terms of performance and concurrency - a single storage container per host, or a storage container for each job that is requested.
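A minimal sketch of that flow, assuming plain `docker` CLI calls (the container name `job-storage`, the image names, and the paths are illustrative, not what the plugin actually generates):

```
# 1. Create a storage-only container that owns the workspace bind mount,
#    created with the host user's UID/GID so files stay writable on the host.
docker create --name job-storage \
  --user "$(id -u):$(id -g)" \
  -v "$HOME/workspace/my-job:/workspace" \
  busybox true

# 2. Run the job container as whatever user it needs (root here, e.g. for DinD)
#    and attach the workspace through the storage container's volumes.
docker run --rm --privileged --user root \
  --volumes-from job-storage \
  my-build-image /workspace/build.sh
```

Whether `job-storage` is shared per host or created per job is exactly the performance/concurrency trade-off mentioned above.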
Is related to: JENKINS-34194 - docker volume mounts not working as expected when run from within swarm container (Resolved)
[JENKINS-29760] Docker in Docker support - use volumes-from for handling host filesystem
Description | New: (issue description text, as above) |
Environment | Original: (issue description text, as above) |
Resolution | New: Fixed [ 1 ] |
Status | Original: Open [ 1 ] | New: Closed [ 6 ] |
Link | New: This issue is related to JENKINS-34194 |
Workflow | Original: JNJira [ 164835 ] | New: JNJira + In-Review [ 209091 ] |
The proposed approach won't fix the permission issue: as long as a process runs with an arbitrary user ID, it will create files in the workspace with unexpected permissions that Jenkins won't be able to handle later.
About DinD, it's possible to make the launch command configurable, so you can run `wrapdocker /bin/cat` and then have all subsequent build steps run with `docker exec` with the user option set. But you won't then be able to run nested containers unless you have prepared your Docker image so the Jenkins build user belongs to the docker group :-\
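A rough sketch of that alternative, assuming a DinD-capable image that ships `wrapdocker` (the image name `my-dind-image` and the build command are illustrative):

```
# Start the agent container with a long-running dummy command
# (`-i` keeps stdin open so `cat` blocks instead of exiting).
docker run -d -i --privileged --name build-agent my-dind-image wrapdocker /bin/cat

# Run each build step inside it as the Jenkins host user via `docker exec -u`.
docker exec -u "$(id -u):$(id -g)" build-agent mvn -B clean verify
```

As noted, nested `docker` calls from those exec'd steps will only work if that user is in the image's docker group.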