• Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: docker-workflow-plugin
    • Labels: None
    • Environment: Jenkins Core 2.73

      When running a build inside a Docker container, some commands fail because they rely on the current user being properly set up. For example, ssh fails with the following error:


      No user exists for uid 150.


      I think this could be solved by appending an entry to /etc/passwd on container startup, something like this (untested, as a proof of concept):

      if [ "$(id -u)" != "0" ]; then
          echo "jenkins:x:$(id -u):$(id -g):Jenkins:${HOME}:/sbin/nologin" >> /etc/passwd
      fi
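
      Wired into an image, that might look like the entrypoint sketch below (also untested; since the build user is not root, it assumes the image made /etc/passwd writable for that user, e.g. with a chmod at image build time):

      #!/bin/sh
      # Entrypoint sketch: register the injected UID in /etc/passwd before
      # handing off to the real command. Assumes /etc/passwd was made
      # writable for the build user when the image was built.
      if [ "$(id -u)" != "0" ] && ! grep -q "^[^:]*:[^:]*:$(id -u):" /etc/passwd; then
          echo "jenkins:x:$(id -u):$(id -g):Jenkins:${HOME:-/home/jenkins}:/sbin/nologin" >> /etc/passwd
      fi
      exec "$@"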

          [JENKINS-47026] User not completely set in docker containers

          Nicola Worthington added a comment - edited

          Manually modifying /etc/passwd like this feels quite wrong to me. I understand that this is intended to be a lightweight fix that doesn't rely on extraneous system maintenance packages, but who is to say that any given Docker container is even configured to look up users via /etc/passwd?

          I think the root problem here is that Jenkins relies upon a shared volume model as the transport mechanism to get source into the container, and the build artefacts back out. While this is arguably reasonable for the input vector (permissions of the files being injected can be left fairly open), we clearly see the problems when trying to get files back out.

          Would a better approach be to decouple the container workspace filesystem from the host and implement an RPC mechanism to deliver the build artefacts back to Jenkins?

          I believe GitLab CI does something similar with its job caching and artefact management, so running tasks inside the Docker container as root is not a problem.


          Waldek M added a comment - edited

          I've prepared an example of how this fixed, non-root user breaks RPM builds, using:

          - a sample Docker image based on CentOS: https://hub.docker.com/r/weakcamel/centos-python2-build/
          - a simple PyPI package, picked almost at random, to use as the source code base: https://github.com/k-bx/python-semver.git

          I'll attach the content of the Jenkinsfile and output log (slightly redacted) shortly; example failure:


          [docker-build-weeps] Running shell script
          + . /tmp/venv/bin/activate
          [...]
          + python setup.py bdist_rpm
          running bdist_rpm
          running egg_info
          writing semver.egg-info/PKG-INFO
          writing top-level names to semver.egg-info/top_level.txt
          writing dependency_links to semver.egg-info/dependency_links.txt
          reading manifest file 'semver.egg-info/SOURCES.txt'
          reading manifest template 'MANIFEST.in'
          writing manifest file 'semver.egg-info/SOURCES.txt'
          creating build/bdist.linux-x86_64
          creating build/bdist.linux-x86_64/rpm
          creating build/bdist.linux-x86_64/rpm/SOURCES
          creating build/bdist.linux-x86_64/rpm/SPECS
          creating build/bdist.linux-x86_64/rpm/BUILD
          creating build/bdist.linux-x86_64/rpm/RPMS
          creating build/bdist.linux-x86_64/rpm/SRPMS
          writing 'build/bdist.linux-x86_64/rpm/SPECS/semver.spec'
          running sdist
          running check
          creating semver-2.7.9
          creating semver-2.7.9/semver.egg-info
          making hard links in semver-2.7.9...
          hard linking MANIFEST.in -> semver-2.7.9
          hard linking README.rst -> semver-2.7.9
          hard linking semver.py -> semver-2.7.9
          hard linking setup.py -> semver-2.7.9
          hard linking semver.egg-info/PKG-INFO -> semver-2.7.9/semver.egg-info
          hard linking semver.egg-info/SOURCES.txt -> semver-2.7.9/semver.egg-info
          hard linking semver.egg-info/dependency_links.txt -> semver-2.7.9/semver.egg-info
          hard linking semver.egg-info/top_level.txt -> semver-2.7.9/semver.egg-info
          Writing semver-2.7.9/setup.cfg
          creating dist
          Creating tar archive
          removing 'semver-2.7.9' (and everything under it)
          copying dist/semver-2.7.9.tar.gz -> build/bdist.linux-x86_64/rpm/SOURCES
          building RPMs
          rpmbuild -ba --define _topdir /var/spool/jenkins/workspace/docker-build-weeps/build/bdist.linux-x86_64/rpm --clean build/bdist.linux-x86_64/rpm/SPECS/semver.spec
          error: Bad owner/group: /var/spool/jenkins/workspace/docker-build-weeps/build/bdist.linux-x86_64/rpm/SOURCES/semver-2.7.9.tar.gz
          error: command 'rpmbuild' failed with exit status 1


          Pipeline to reproduce the behaviour is pretty simple:


          pipeline {
              agent {
                  docker {
                      image 'weakcamel/centos-python2-build:3'
                      label 'cam1'
                  }
              }
              stages {
                  stage('build and unit test') {
                      steps {
                          git changelog: false, poll: false, url: 'https://github.com/k-bx/python-semver.git'
                          sh 'virtualenv /tmp/venv'
                          sh '. /tmp/venv/bin/activate && python setup.py build'
                          sh '. /tmp/venv/bin/activate && python setup.py bdist_rpm'
                      }
                  }
              }
          }
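
          For reference, the same failure mode shows up outside Jenkins when the image is run with an arbitrary UID, the way the plugin runs it (a sketch; UID 1223 is arbitrary):

          docker run --rm -u 1223:1223 weakcamel/centos-python2-build:3 \
              sh -c 'id; whoami'   # whoami fails: no passwd entry for UID 1223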


          Waldek M added a comment - edited

          Note also that if you try to use Python's `pip` command, there's a warning as well, related to the user not being fully set up:

          + pip install virtualenv
          The directory '/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
          The directory '/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
          Requirement already satisfied: virtualenv in /usr/lib/python2.7/site-packages


          Waldek M added a comment -

          Related to what the Original Poster mentioned about SSH problems in this setup, there are general problems with using plain Git commands within the container (as initiated by Jenkins).


          a) Over SSH:

          $ docker run -u 1223:1223 -it weakcamel/centos-python2-build:3
          bash-4.2$ cd /tmp
          bash-4.2$ git clone git@github.com:k-bx/python-semver.git
          Cloning into 'python-semver'...
          No user exists for uid 1223
          fatal: Could not read from remote repository.
          
          Please make sure you have the correct access rights
          and the repository exists.


          b) over HTTPS as well:


          bash-4.2$ git clone https://github.com/k-bx/python-semver.git
          Cloning into 'python-semver'...
          remote: Counting objects: 534, done.
          remote: Total 534 (delta 0), reused 0 (delta 0), pack-reused 534
          Receiving objects: 100% (534/534), 89.52 KiB | 0 bytes/s, done.
          Resolving deltas: 100% (285/285), done.
          fatal: unable to look up current user in the passwd file: no such user
          Unexpected end of command stream
          bash-4.2$ echo $?
          128
          bash-4.2$ ls -al python-semver
          ls: cannot access python-semver: No such file or directory


          Waldek M added a comment -

          I took the liberty of linking issue JENKINS-49416, which is also a consequence of how the plugin spins up Docker containers with an arbitrary user/group/entrypoint script.


          Taras Bondarchuk added a comment - edited

          Since I'm building the agent from a Dockerfile anyway, I've fixed this with:


          agent {
             dockerfile {
                additionalBuildArgs '--build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g)'
             }
          }
          

          and in Dockerfile:


          # Defaults are placeholders; the pipeline above overrides them with
          # the agent's real UID/GID so file ownership matches the host.
          ARG USER_ID=1000
          ARG GROUP_ID=1000
          RUN groupadd -g $GROUP_ID user && \
              useradd -u $USER_ID -s /bin/sh -g user user
          
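          Outside a Jenkinsfile, the equivalent manual build would be something like this (the image tag here is made up for the example):

          docker build \
              --build-arg USER_ID="$(id -u)" \
              --build-arg GROUP_ID="$(id -g)" \
              -t my-build-agent .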


          Waldek M added a comment - edited

          Thanks for sharing the workaround, aliusmiles!


          Interestingly, passing such arguments to a prebuilt image with a `docker` closure did not work for me:


          agent {
              docker {
                  image 'foo'
                  args '--env USER_ID=$(id -u) --env GROUP_ID=$(id -g)'
              }
          }

          The values passed were literally "$(id -u)" (not interpreted), presumably because the args string is handed straight to docker run rather than through a shell.

          I'll give it a go and see.


          Waldek M added a comment -

          Just for the record: Dockerfile workaround worked fine. Thank you!


          Michael Slattery added a comment - edited

          This workaround worked for me without having to use the Dockerfile approach.

          environment {
              JAVA_OPTS="-Duser.home=${JENKINS_HOME}"
              MAVEN_OPTS="${JAVA_OPTS}"
              MAVEN_CONFIG="${JENKINS_HOME}/.m2"  // docker/maven specific.
          }
          agent {
              docker {
                  image 'buildtool'
                  args "-e HOME=${JENKINS_HOME}"
              }
          }
          

          I prefer this solution as it universally works with all containers (so far) and we use a few off-the-shelf images that I'd rather not heavily modify.

          I believe most tools will work, including maven, gradle, pip, npm, git, etc.
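
          As an illustration, the override amounts to something like this sketch (the image name is ours; $JENKINS_HOME must be a host directory that is mounted into the container):

          # Run as an arbitrary UID but point HOME at a writable mounted
          # directory, which is enough for tools that only need a home dir.
          docker run --rm -u "$(id -u):$(id -g)" \
              -e HOME=/var/jenkins_home \
              -v "$JENKINS_HOME":/var/jenkins_home \
              buildtool sh -c 'mkdir -p "$HOME/.m2" && ls -ld "$HOME"'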


          Daniel Sorensen added a comment -

          One way you can solve this is by mounting the /etc/passwd file from the Docker host into the container, within the docker block of the Jenkins Pipeline configuration:

          args '-v /etc/passwd:/etc/passwd:ro'


          Waldek M added a comment -

          Depends on your setup; it won't work if you're using LDAP or any other external authentication service.
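
          One way around that, as an untested sketch, is to synthesise a minimal passwd file on the host with getent, which resolves LDAP/SSSD users too, and mount that instead:

          # getent resolves the current user through NSS (files, LDAP, SSSD, ...)
          getent passwd "$(id -u)" > /tmp/jenkins-passwd
          # then in the Jenkinsfile: args '-v /tmp/jenkins-passwd:/etc/passwd:ro'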


          a b added a comment -

          mslattery If I'm understanding your suggestion correctly, I believe this just sets the home/working directory for the particular tool to the mapped Jenkins workspace?

          If so, that might work on a tool-by-tool basis in some cases, but I don't believe this would solve the root issue for programs like SSH, which rely on proper entries in /etc/passwd at a minimum.
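
          For those programs, one possible route (an untested sketch; it assumes the nss_wrapper package is present in the image, and the library path varies by distribution) is to fake the lookup with nss_wrapper rather than editing /etc/passwd:

          # Build a one-line passwd file for the current UID, then preload
          # nss_wrapper so getpwuid() succeeds for ssh/git without root.
          echo "jenkins:x:$(id -u):$(id -g):Jenkins:${HOME}:/bin/sh" > /tmp/passwd
          export NSS_WRAPPER_PASSWD=/tmp/passwd
          export NSS_WRAPPER_GROUP=/etc/group
          export LD_PRELOAD=/usr/lib64/libnss_wrapper.so   # path is distro-specific
          ssh -T git@github.com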


          a b added a comment -

          We are running from a pre-built image right now for various reasons, so we ended up using the groupadd/useradd method, but we have had to hard-code the details in our base Dockerfiles/layers before the build. A very unfortunate workaround; hopefully this gets fixed at some point.


          a b added a comment -

          weakcamel did you ever get the following method to work somehow? I am in the same position and we aren't able to use Dockerfiles right now.

          agent {
              docker {
                  image 'foo'
                  args '--env USER_ID=$(id -u) --env GROUP_ID=$(id -g)'
              }
          }


            Assignee: Unassigned
            Reporter: Eric Dahlseng (edahlseng)
            Votes: 6
            Watchers: 12