
Cannot set custom PATH inside docker container

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: docker-workflow-plugin
    • Labels: None
    • Environment: Jenkins 2.89.3, docker-workflow 1.14

      I'm trying to set a custom PATH in a docker.image('...').inside block.

      For example, I would like to be able to do something like this:

      node('docker') {
          docker.image('some-build-image').inside {
              sh 'echo $PATH'
              withEnv(['PATH+MAVEN=/opt/maven-3.3.3/bin']) {
                  sh 'echo $PATH'
                  sh 'mvn --version'
              }
          }
      }
      

      But the PATH environment variable inside the docker container does not get updated: the two echo statements produce exactly the same output, and the Maven command fails with "mvn: command not found".

      I see that as a result of JENKINS-43590, the PATH env var is no longer passed from the host to the docker container (which seems sensible, as the environments can be different), but I feel it should still be possible to manipulate the PATH variable inside the docker container somehow, e.g. by using withEnv. Even a workaround like running the shell step sh 'export PATH=$PATH:/opt/maven-3.3.3/bin' does not have the required outcome.
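
      Incidentally, the export workaround cannot help across steps regardless of this issue: each sh step runs in a fresh shell process, so exported variables die with the step that set them. A quick sketch, reusing the same Maven path:

      sh 'export PATH=$PATH:/opt/maven-3.3.3/bin'  // the export dies with this shell
      sh 'mvn --version'                           // fresh shell: still "command not found"
      // combining both in one step keeps the export alive for that command
      sh 'export PATH=$PATH:/opt/maven-3.3.3/bin && mvn --version'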

          [JENKINS-49076] Cannot set custom PATH inside docker container

          Jesse Glick added a comment -

          Avoid doing this. If you cannot just define an image which has the desired PATH to begin with, better to avoid withDockerContainer and do whatever you need using sh 'docker …' directly.
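
          For illustration, a minimal sketch of that approach, reusing the hypothetical some-build-image and Maven path from the report:

          node('docker') {
              // drive docker directly instead of withDockerContainer/inside,
              // setting PATH explicitly for the container process
              sh '''
                  docker run --rm -v "$WORKSPACE:/ws" -w /ws \\
                      -e PATH=/opt/maven-3.3.3/bin:/usr/local/bin:/usr/bin:/bin \\
                      some-build-image mvn --version
              '''
          }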


          Sergey Seroshtan added a comment -

          Hello guys. I've run into the same issue, and here is my workaround.

          The basic idea is to build a temporary image from a Dockerfile that sets a modified PATH.

          writeFile(file: 'Dockerfile', text: """
              FROM basic-image
              ENV PATH="/path/to/the/tools:\${PATH}"
          """)

          def container = docker.build('from-basic-image-tmp')
          container.inside {
              sh 'echo $PATH'
          }
          


          Michael Musenbrock added a comment - - edited

          As I'm having the same issue currently, I stumbled over this ticket.
          I wanted to hook into jglick's answer and give some additional thoughts/examples.
          First (which of course could be resolved non-technically, but is sometimes a pain): for some environments/customers/... you can't control the images and may need to work with what is available on a private registry. So it would make life much easier and the pipelines cleaner if you had the ability to define/add something to the current PATH variable.
          And secondly, IMHO more important: why should two otherwise identical pipelines, both running on a clean debian stable, one in a container and the other on the normal agent, behave differently?

          // WORKING pipeline
          pipeline {
              agent {
                  // runs debian stable
                  label 'master'
              }
              stages {
                  stage('ENV test') {
                      steps {
                          sh "echo 'echo hello' > test_script"
                          sh "chmod +x ./test_script"
                          withEnv(["PATH+EXTRA=${WORKSPACE}"]) {
                              sh "env | grep PATH"
                              sh "test_script"
                          }
                      }
                  }
              }
          }
          
          // FAILING pipeline
          pipeline {
              agent {
                  docker {
                      image 'debian:stable'
                      label 'master'
                  }
              }
              stages {
                  stage('ENV test') {
                      steps {
                          sh "echo 'echo hello' > test_script"
                          sh "chmod +x ./test_script"
                          withEnv(["PATH+EXTRA=${WORKSPACE}"]) {
                              sh "env | grep PATH"
                              sh "test_script"
                          }
                      }
                  }
              }
          }
          


          Tim Brown added a comment - - edited

          sseroshtan Thanks for the workaround, it's solved a ton of Jenkins-related Docker issues for us.

          That said, we still hit this once the container is running. We want to install command-line tools in a virtual env, but hit this issue when adding them to PATH.

          I agree with redeamer that I would expect withEnv to work the same inside and outside the withDockerContainer context. If it can't be made to work, then it should error so users don't spend time debugging issues.


          Peter Bauer added a comment -

          Stumbled over the same issue. Is there any technical reason why this would be hard to fix/implement? It is at least inconsistent that it works for all other environment variables, and the workarounds have quite an impact performance-wise and/or logic-wise.
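
          A minimal sketch reproducing that inconsistency (the image and directory are just placeholders):

          node('docker') {
              docker.image('debian:stable').inside {
                  withEnv(['FOO=bar', 'PATH+EXTRA=/opt/tools/bin']) {
                      sh 'echo $FOO'   // prints "bar" -- ordinary variables are passed through
                      sh 'echo $PATH'  // unchanged -- the PATH override is silently dropped
                  }
              }
          }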


          Raphael added a comment - - edited

          Avoid doing this.

          I'd love to! If only this weren't the end of a chain of dirty workarounds.

          Maybe I'm being dumb; here's my use case.
          There's some anger here because I've been wasting most of two workdays on getting what I think is a fairly simple setup running. Nothing personal, and I'm happy to consider alternative approaches.

          • Top-level agent none (don't want to block an agent slot I'm not using, since ...)
          • All stages run with agent { docker }
          • Run pip install against a venv in the first stage.
          • Then run, say, pytest and flake8 in parallel stages using that same venv.

          Here's my thought process, abbreviated:

          • I can get the venv to later stages using stash, okay.
          • I already gave up on parallel since Jenkins insists on renaming the workspace mount inside the container for some reason, and at the same time venv hard-codes paths.
          • reuseNode true – not my favorite (prefer to isolate stages) but didn't seem to work, anyway.
          • Have to activate the venv now!
            • source .venv/bin/activate – never mind, Jenkins uses `sh`.
            • Switching to bash for each sh step? Fugly as hell.
            • export PATH=... in each sh step? Also fugly.
            • Set PATH in the Docker image – doesn't help since Jenkins doesn't honor WORKDIR and unstash doesn't preserve absolute paths.

          So I end up dropping parallel and at least wanting to set PATH once for all stages.
          Only that doesn't work, since WORKSPACE isn't set at the top level, and then docker inspect hangs for some reason.

          So can I please just use withEnv(pythonEnv(env.WORKSPACE)) with my little helper function?
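
          (pythonEnv itself isn't shown in the thread; a hypothetical sketch of such a helper would be something like:)

          // hypothetical helper: withEnv entries that put a workspace venv first on PATH
          def pythonEnv(String workspace) {
              return ["VIRTUAL_ENV=${workspace}/.venv",
                      "PATH+VENV=${workspace}/.venv/bin"]
          }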

          You can see how there are any number of changes that could be made to whichever component of Jenkins to solve this use case better. But it's where I'm at, and I'm burning time.

          In the meantime, I guess I'll write my own step functions that hide away all the fuglyness to make a brittle, half-assed pipeline at least somewhat readable. Sigh.

          Update: FWIW, I'm now prepending all scripts that should run with a venv with this:

          """
          #!/usr/bin/env bash
          set -eu
          source <(sed 's%^VIRTUAL_ENV=.*$%VIRTUAL_ENV="'"\$(realpath .venv)"'"%' .venv/bin/activate)
          """.stripIndent().stripLeading()
          

          Is it pretty? No, but I hid it so deep in a library that it shall soon be forgotten.


          Jan Gałda added a comment - - edited

          Hi, I also spent a lot of time working around this. My solution is to create a simple script which exports PATH, and then to set the BASH_ENV variable to point at it.

          Basic example:

          def withPath(String additionalPath, Closure closure) {
              writeFile(file: '.withPath.bashrc', text: "export PATH=\$PATH:$additionalPath")
              withEnv(["BASH_ENV=.withPath.bashrc"]) {
                  closure()
              }
          }
          
          node('myNode') {
              docker.image("ubuntu:latest").inside() {
                  withPath('/directory/which/should/be/in/path') {
                      sh('env')
                  }
              }
          } 

          Of course, you can add some cleanup in try/finally if needed.
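
          One caveat with this approach: BASH_ENV is read only by bash, while the sh step runs /bin/sh by default, so the exported PATH only shows up when the script actually runs under bash, e.g. via a shebang. A sketch using the withPath helper above:

          withPath('/directory/which/should/be/in/path') {
              // the shebang makes the step run under bash, which is what reads BASH_ENV
              sh '''#!/bin/bash
                  echo "$PATH"
              '''
          }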


          Luka added a comment -

          jgalda I tried it just now, the same way you did it, but the `PATH` variable still remains unchanged.

          I can `cat` the file from within the `withEnv()` block, and even `source` it*, and yet, the `PATH` variable is the same as it was before the block.

          Using absolute paths does not help, nor does using the variable `ENV` instead of `BASH_ENV` (since the underlying shell is `sh`), and I'm all out of ideas except for hardcoding the path through the `-e PATH=whateverIwant` parameter when launching the docker container, which will have to do for now.

          Have you stumbled upon any other workarounds in the meantime, maybe?

          * I know sourcing doesn't change the `PATH` for subsequent `sh` invocations, but I tried `. /path/to/.withPath.bashrc && echo $PATH` and that worked as expected; the `PATH` was updated.
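
          For reference, the `-e` workaround mentioned above looks roughly like this in scripted pipeline (the paths are placeholders); since the value goes straight onto the docker run command line, it bypasses the withEnv machinery entirely:

          node('myNode') {
              docker.image('ubuntu:latest').inside('-e PATH=/path/to/venv/bin:/usr/local/bin:/usr/bin:/bin') {
                  sh 'echo $PATH'  // shows exactly the value passed above
              }
          }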


          Jan Gałda added a comment -

          markmarkmark how do you check the value of PATH? There are a few ways to do it, and depending on which one you use you get either the node's or the container's PATH.

          For example:

          sh('env | grep PATH=') // prints docker's PATH
          println("PRINTLN $PATH") // prints node's PATH
          echo("ECHO $PATH") // prints node's PATH
          sh('echo $PATH') // prints docker's PATH
          sh("echo $PATH") // prints node's PATH


          Luka added a comment -

          jgalda I tried all five of them:

          def withPathBashRC = "${pwd()}/.withPath.bashrc"
          writeFile( file: withPathBashRC, text: "export PATH=/path/to/python/venv/bin:\$PATH" )
          withEnv([ "PATH=${env.PATH}:asdfffffff", "BASH_ENV=${withPathBashRC}", "ENV=${withPathBashRC}" ] ) {
              sh('env | grep PATH=')   // missing the part I want added
              println("PRINTLN $PATH") // contains asdfffffff
              echo("ECHO $PATH")       // contains asdfffffff
              sh('echo $PATH')         // contains asdfffffff
              sh("echo $PATH")         // missing the part I want added
          }

          Which means it only gets applied to the node's PATH for some reason. Since I'm trying to add a Python virtual env to the path, sourcing the file explicitly is the only thing that works as expected:

          sh('which python')
          /usr/bin/python
          
          sh('. $BASH_ENV && which python')
          /path/to/venv/bin/python


            Assignee: Unassigned
            Reporter: Robin Smith (robin_smith)
            Votes: 21
            Watchers: 27