Jenkins / JENKINS-49076

Cannot set custom PATH inside docker container

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: docker-workflow-plugin
    • Labels: None
    • Environment: Jenkins 2.89.3
      docker-workflow 1.14

      I'm trying to set a custom PATH in a docker.image('...').inside block.

      For example, I would like to be able to do something like this:

      node('docker') {
          docker.image('some-build-image').inside {
              sh 'echo $PATH'
              withEnv(['PATH+MAVEN=/opt/maven-3.3.3/bin']) {
                  sh 'echo $PATH'
                  sh 'mvn --version'
              }
          }
      }
      

      But the PATH environment variable inside the docker image does not get updated - the two echo statements produce exactly the same output, and the Maven command fails with the following error: "mvn: command not found"

      I see that as a result of JENKINS-43590, the PATH env var is no longer passed from the host to the docker container (which seems sensible, as the environments can be different), but I feel it should still be possible to manipulate the PATH variable inside the docker container somehow, e.g. by using withEnv. Even a workaround like running the shell step sh 'export PATH=$PATH:/opt/maven-3.3.3/bin' does not have the required outcome.
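The export workaround fails because each sh step spawns a fresh shell process, so nothing exported in one step survives into the next. A minimal sketch of the same effect outside Jenkins, assuming a POSIX shell:

```shell
# Each Jenkins `sh` step runs in its own shell process, roughly like this:
sh -c 'export PATH="$PATH:/opt/maven-3.3.3/bin"'  # export dies with this shell
sh -c 'echo "$PATH"'                              # original PATH, unchanged
```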


          Kieran Webber added a comment -

          Just ran into this issue using withEnv in a docker container. Works for everything but PATH currently.


          Steven Clark added a comment - - edited

          I believe this may be the root cause of the failures I'm seeing within a declarative pipeline as well. When I run this, the PATH printed by the env command is the normal host value, not the value set in my pipeline. This only seems to break when the agent is a docker agent:

          pipeline {
              agent {
                  docker {
                      image 'debian:stretch'
                  }
              }
              
              environment {
                  PATH = "/opt/test/bin:/usr/bin/:/bin"
              }
              
              stages {
                  stage('Example') {
                      steps {
                          sh 'env'
                      }
                  }
              }
          }


          Rong Shen added a comment - - edited

          I'm seeing the same issue here.

          It is definitely necessary to have a way to set the PATH variable inside a docker container. We have a PATH environment variable that changes dynamically with the Jenkins workspace path and can only be set through code.

          Is there a workaround for this issue?


          Paul Theunissen added a comment -

          I did something like this as a workaround:

              // Get the full PATH variable from a started container.
              def pathInContainer
              def parameters = ''
              docker.image(DockerImage).inside() {
                // steps.sh works from a shared-library class; in a Jenkinsfile use sh(...) directly
                pathInContainer = steps.sh(script: 'echo $PATH', returnStdout: true).trim()
                parameters += "-e PATH=${pathInContainer}:/misc/AddExtraPath "
              }
          
              docker.image(DockerImage)
                          .inside(parameters) {
                runA()
                runB()
              }
           

          It extracts the PATH from a started container, then stops the container, and starts a new one with the extra PATH environment variable (which I extended with the things required by us).


          Rong Shen added a comment -

          Thank you theuno! Let me give it a try.


          Nenad Miksa added a comment -

          Any update on this issue? The workaround by theuno works, but it's very ugly and impractical for some use cases.


          Lukasz Walewski added a comment -

          Another workaround is to run commands in a modified environment:

          pipeline {
              agent {
                  docker {
                      image 'debian'
                  }
              }
              environment {
                  PATH = '/some/dir'
              }
              stages {
                  stage('Test') {
                      steps {
                          sh "env PATH=$PATH:\$PATH"
                      }
                  }
              }
          }
          

          The resulting PATH variable contains /some/dir prepended to the PATH value defined inside the container (observe double quotes in the steps directive).

          I fully agree with the others that the option to manipulate the PATH variable inside a container is a must-have. Note that this feature has nothing to do with setting the PATH inside a container to the value this variable has outside of it, i.e. on the Jenkins node, which would obviously be wrong in most cases.
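The env-prefix trick can be verified outside Jenkins as well; env runs a single command with a modified environment and leaves the calling shell untouched. A minimal sketch, assuming a POSIX env:

```shell
# `env VAR=value cmd` modifies the environment for that one command only:
env PATH="/some/dir:$PATH" sh -c 'echo "$PATH"'  # /some/dir is prepended here
echo "$PATH"                                     # unchanged in this shell
```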


          Daniel Sorensen added a comment -

          It is now possible to set a custom PATH inside a Docker container in a declarative pipeline by passing the environment argument -e in the Docker args.

          e.g.

          agent {
              docker {
                image 'amazoncorretto:8u202'
                label 'docker'
                args '''
                  -v $HOME/tools:$HOME/tools
                  -v $HOME/.m2/:$HOME/.m2
                  -e PATH="$PATH:/var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/MVN-360/bin"
                '''
              } 
          }
          
          
          [Pipeline] sh
          + env
          + grep PATH
          PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin:/var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/MVN-360/bin 


          Oleksandr Shmyrko added a comment -

          dsorensen, indeed, it works, and that's a good example.

          I just want to note that passing the environment argument to the docker command will override the original PATH value defined in the image with the one from the Jenkins host, and in some conditions it may not produce the result you expect (this was actually described in JENKINS-43590). Here is such an example:

          pipeline {
              agent any
          
              environment {
                  NODEJS_VERSION = "NodeJS 12.16.2"
              }
          
              stages {
                  stage('Nodejs inside OpenJDK container 1') {
                      agent {
                          docker {
                              reuseNode true
                              image 'openjdk:11.0-jdk-slim'
                              args """
                                  -v ${JENKINS_HOME}/tools:${JENKINS_HOME}/tools
                                  -e PATH="${PATH}:${tool("${NODEJS_VERSION}")}/bin"
                              """
                              // Original PATH from openjdk image:
                              // PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                          }
                      }
                      tools { nodejs "${NODEJS_VERSION}" }
          
                      environment {
                          PATH_JENKINS_ORIGINAL = "${PATH}"
                      }
          
                      steps {
                          sh 'env | sort'
                          sh 'node --version'
          
                          catchError(buildResult: 'SUCCESS',
                                     stageResult: 'FAILURE',
                                     message: 'cannot find java') {
                              sh 'java --version'
                          }
                      }
                  }
          
                  stage('Nodejs inside OpenJDK container 2') {
                      agent {
                          docker {
                              reuseNode true
                              image 'openjdk:11.0-jdk-slim'
                              args """
                                  -v ${JENKINS_HOME}/tools:${JENKINS_HOME}/tools
                              """
                          }
                      }
          
                      tools { nodejs "${NODEJS_VERSION}" }
          
                      environment {
                          NODEJS_BIN_PATH = "${tool("${NODEJS_VERSION}")}/bin"
                          PATH_JENKINS_ORIGINAL = "${PATH}"
                      }
          
                      steps {
                          sh '''
                              env | sort
                              export PATH=${PATH}:${NODEJS_BIN_PATH}
                              node --version
                              java --version
                          '''
                      }
                  }
              }
          }
          

           


          Nuno Costa added a comment -

          Of all the solutions I found related to environment variables inside a Docker image in a declarative pipeline, the only one that worked properly for me was dsorensen's: passing PATH in the docker args.

          It is duplicated effort when the whole environment is already set inside the container, but it is the quickest way of handling env variables.

          I tried jglick's suggestion but it did not work.

          I did not check whether an issue was opened for this specific problem in declarative pipeline. I will try to create one later, if it does not exist.


          Jesse Glick added a comment -

          Avoid doing this. If you cannot just define an image which has the desired PATH to begin with, better to avoid withDockerContainer and do whatever you need using sh 'docker …' directly.

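A hedged sketch of what that advice might look like in practice (image name, paths, and label are placeholders, not from Jesse's comment): skip withDockerContainer entirely and drive docker from an sh step, so the image's own ENV PATH, or an explicit -e override, applies without interference:

```groovy
// Hypothetical sketch, not from the original comment: run the container
// directly via `sh` instead of docker.image(...).inside / withDockerContainer.
node('docker') {
    checkout scm
    sh '''
        docker run --rm \
            -v "$WORKSPACE:$WORKSPACE" -w "$WORKSPACE" \
            -e PATH=/opt/maven-3.3.3/bin:/usr/local/bin:/usr/bin:/bin \
            some-build-image \
            mvn --version
    '''
}
```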

          Sergey Seroshtan added a comment -

          Hello guys. I've met the same issue and here is my workaround.

          The basic idea is to create a temporary image that uses a Dockerfile with a modified PATH.

          writeFile(file: 'Dockerfile', text: """
              FROM 'basic-image'
              ENV PATH=\"/path/to/the/tools:\${PATH}\"
          """)
          
          def container = docker.build('from-basic-image-tmp')
          container.inside {
              sh 'echo ${PATH}'
          }
          


          Michael Musenbrock added a comment - - edited

          As I'm having the same issue currently, I stumbled over this ticket.
          I wanted to hook into jglick's answer and give some additional thoughts/examples.
          First, which of course could be resolved non-technically, but is sometimes a pain: for some environments/customers you can't control the images and may need to work with what is available on a private registry. So it would make life much easier and the pipelines cleaner to have the ability to define or add something to the current PATH variable.
          And secondly, IMHO more important: why should two identical pipelines, both running on a clean debian stable, one in a container and one on the normal agent, behave differently?

          // WORKING pipeline
          pipeline {
              agent {
                  // runs debian stable
                  label 'master'
              }
              stages {
                  stage('ENV test') {
                      steps {
                          sh "echo 'echo hello' > test_script"
                          sh "chmod +x ./test_script"
                          withEnv(["PATH+EXTRA=${WORKSPACE}"]) {
                              sh "env | grep PATH"
                              sh "test_script"
                          }
                      }
                  }
              }
          }
          
          // FAILING pipeline
          pipeline {
              agent {
                  docker {
                      image 'debian:stable'
                      label 'master'
                  }
              }
              stages {
                  stage('ENV test') {
                      steps {
                          sh "echo 'echo hello' > test_script"
                          sh "chmod +x ./test_script"
                          withEnv(["PATH+EXTRA=${WORKSPACE}"]) {
                              sh "env | grep PATH"
                              sh "test_script"
                          }
                      }
                  }
              }
          }
          


          Tim Brown added a comment - - edited

          sseroshtan Thanks for the workaround, it's solved a ton of Jenkins related docker issues for us.

          That said, we still hit this once the container is running. We want to install command-line tools in a virtual env, but hit this issue when adding them to path.

          I agree with redeamer that I would expect withEnv to work the same inside and outside the withDockerContainer context. If it can't be made to work, then it should error so users don't spend time debugging issues.


          Peter Bauer added a comment -

          Stumbled over the same issue. Is there any technical reason why this would be hard to fix/implement? It is at least inconsistent that it works for all other environment variables, and the workarounds have quite an impact performance-wise and/or logic-wise.


          Raphael added a comment - - edited

          Avoid doing this.

          Love to! If this wasn't the end in a chain of dirty workarounds.

          Maybe I'm being dumb; here's my use case.
          There's some anger here because I've been wasting most of two workdays on getting what I think is a fairly simple setup running. Nothing personal, and I'm happy to consider alternative approaches.

          • Root agent none (don't want to block any agent slot I'm not using, since ...)
          • All stages run with agent { docker }
          • Run pip install against a venv in the first stage.
          • Then run, say, pytest and flake8 in parallel stages using that same venv.

          Here's my thought process, abbreviated:

          • I can get the venv to later stages using stash, okay.
          • I already gave up on parallel since Jenkins insists on renaming the workspace mount inside the container for some reason, and at the same time venv hard-codes paths.
          • reuseNode true – not my favorite (prefer to isolate stages) but didn't seem to work, anyway.
          • Have to activate the venv now!
            • source .venv/bin/activate – never mind, Jenkins uses `sh`.
            • Switching to bash for each sh step? Fugly as hell.
            • export PATH=... in each sh step? Also fugly.
            • Set PATH in the Docker image – doesn't help since Jenkins doesn't honor WORKDIR and unstash doesn't preserve absolute paths.

          So I ended up dropping parallel and at least wanting to set PATH once for all stages.
          Only that doesn't work since WORKSPACE isn't set at the top level, and then docker inspect hangs for some reason.

          So can I please just use withEnv(pythonEnv(env.WORKSPACE)) with my little helper function?

          You can see how there are any number of changes that could be made to whichever component of Jenkins to solve this use case better. But it's where I'm at, and I'm burning time.

          In the meantime, I guess I'll write my own step functions that hide away all the fuglyness to make a brittle, half-assed pipeline at least somewhat readable. Sigh.

          Update: FWIW, I'm now prepending all scripts that should run with a venv with this:

          """
          #!/usr/bin/env bash
          set -eu
          source <(sed 's%^VIRTUAL_ENV=.*$%VIRTUAL_ENV="'"\$(realpath .venv)"'"%' .venv/bin/activate)
          """.stripIndent().stripLeading()
          

          Is it pretty? No, but I hid it so deep in a library that it shall soon be forgotten.


          Jan Gałda added a comment - - edited

          Hi, I also spent a lot of time working around this. My solution is to create a simple script which exports PATH, and then to set the BASH_ENV variable to point at it.

          Basic example:

          def withPath(String additionalPath, Closure closure) {
              writeFile(file: '.withPath.bashrc', text: "export PATH=\$PATH:$additionalPath")
              withEnv(["BASH_ENV=.withPath.bashrc"]) {
                  closure()
              }
          }
          
          node('myNode') {
              docker.image("ubuntu:latest").inside() {
                  withPath('/directory/which/should/be/in/path') {
                      sh('env')
                  }
              }
          } 

          Of course, you can add some cleanup in try/finally if needed.
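The BASH_ENV mechanism this relies on can be demonstrated outside Jenkins: bash sources the file named by BASH_ENV when it starts non-interactively. Note it only applies when the step's shell really is bash, not plain sh. A minimal sketch (file path is a placeholder):

```shell
# bash sources $BASH_ENV at startup of non-interactive shells:
echo 'export PATH="$PATH:/extra/bin"' > /tmp/pathrc
BASH_ENV=/tmp/pathrc bash -c 'echo "$PATH"'  # ends with :/extra/bin
```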


          Luka added a comment -

          jgalda I tried it just now, the same way you did it, but the `PATH` variable still remains unchanged.

          I can `cat` the file from within the `withEnv()` block, and even `source` it*, and yet, the `PATH` variable is the same as it was before the block.

          Using absolute paths does not help, nor does using the variable `ENV` instead of `BASH_ENV` (since the underlying shell is `sh`) and I'm all out of ideas except for hardcoding the path through the `-e PATH=whateverIwant` parameter when launching the `docker container`, which will have to do for now.

           

          Have you stumbled upon any other workarounds in the meantime maybe?

          * I know sourcing doesn't change the `PATH` for subsequent `sh` invocations, but I tried `. /path/to/.withPath.bashrc && echo $PATH` and that worked as expected: the `PATH` was updated.


          Jan Gałda added a comment -

          markmarkmark how do you check the value of PATH? There are a few ways to do it, and depending on which one you use you get either the node's or the container's PATH.

          For example:

          sh('env | grep PATH=') // prints docker's PATH
          println("PRINTLN $PATH") // prints node's PATH
          echo("ECHO $PATH") // prints node's PATH
          sh('echo $PATH') // prints docker's PATH
          sh("echo $PATH") // prints node's PATH


          Luka added a comment -

          jgalda I tried all five of them:

          def withPathBashRC = "${pwd()}/.withPath.bashrc"
          writeFile( file: withPathBashRC, text: "export PATH=/path/to/python/venv/bin:\$PATH" )
          withEnv([ "PATH=${env.PATH}:asdfffffff", "BASH_ENV=${withPathBashRC}", "ENV=${withPathBashRC}" ] ) {
              sh('env | grep PATH=')   // missing the part I want added
              println("PRINTLN $PATH") // contains asdfffffff
              echo("ECHO $PATH")       // contains asdfffffff
              sh('echo $PATH')         // contains asdfffffff
              sh("echo $PATH")         // missing the part I want added
          }

          Which means it only gets applied to the node's PATH for some reason. Since I'm trying to add the path to a python virtual env, this works as expected:

          sh('which python')
          /usr/bin/python
          
          sh('. $BASH_ENV && which python')
          /path/to/venv/bin/python


            Assignee: Unassigned
            Reporter: Robin Smith (robin_smith)
            Votes: 21
            Watchers: 27