docker.inside allows evaluation of environment variables containing backticks

      The issue occurs specifically when building Git pull requests whose title (exposed in the environment as `CHANGE_TITLE`) contains expressions wrapped in backticks, which is a fairly common scenario.
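
      For reference, a minimal pipeline that exercises the affected code path looks roughly like the sketch below (a sketch only; the image and step body are arbitrary, and in a real multibranch build `CHANGE_TITLE` is injected by the branch source rather than set by hand):

      node {
          // CHANGE_TITLE is normally injected by the branch source for pull
          // request builds; docker.inside re-exposes it to `docker run` via "-e".
          docker.image('busybox').inside {
              sh 'echo hello from inside the container'
          }
      }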

      `docker.image(...).inside()` aims to re-expose all environment variables of the build within the container, as can be seen here: https://github.com/jenkinsci/docker-workflow-plugin/blob/docker-workflow-1.18/src/main/java/org/jenkinsci/plugins/docker/workflow/client/DockerClient.java#L123-L126

      Among the variables available at build time (https://hudson.eclipse.org/webtools/env-vars.html/), information about the pull request being built is also exposed. This leads to the pull request title being passed to the `docker run` command, backticks and all, which breaks the build. For example, with "CHANGE_TITLE=`sentence in backticks` testing" set:

      Executing command: "docker" "run" "-t" "-d" "-u" "0:0" "-w" "/home/jenkins/workspace/test" "-v" "/home/jenkins/workspace/test:/home/jenkins/workspace/test:rw,z" "-v" "/home/jenkins/workspace/test@tmp:/home/jenkins/workspace/test@tmp:rw,z" "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "-e" ******** "busybox" "cat" 
       exit
       /bin/sh: sentence: not found
       e227748663c1923219f9b106ed23f7332e2ffb05aa81ddc79561e86ff7b2a9b0
       Executing shell script inside container [docker] of pod [build-2g-l595l-8mq5t]
       Executing command: "docker" "top" "/bin/sh: sentence: not found
       e227748663c1923219f9b106ed23f7332e2ffb05aa81ddc79561e86ff7b2a9b0" "-eo" "pid,comm" 
       exit
       Error response from daemon: page not found

       

      Considering that the container ID for running the command inside is expected to be returned by the `docker run` command (https://github.com/jenkinsci/docker-workflow-plugin/blob/docker-workflow-1.18/src/main/java/org/jenkinsci/plugins/docker/workflow/WithContainerStep.java#L185), this breaks the functionality of `docker.inside`, as can be seen in the output above: `container` will be set to `"/bin/sh: sentence: not found e227748663c1923219f9b106ed23f7332e2ffb05aa81ddc79561e86ff7b2a9b0"`.
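
      Conceptually, the failure mode is that the whole trimmed output of the `docker run` call ends up being used as the container id. A hedged sketch of what happens (not the plugin's actual code):

      // Whatever `docker run -d ...` prints is taken wholesale as the container id,
      // so the extra `/bin/sh: sentence: not found` line corrupts it.
      String rawOutput = '/bin/sh: sentence: not found\n' +
          'e227748663c1923219f9b106ed23f7332e2ffb05aa81ddc79561e86ff7b2a9b0'
      String container = rawOutput.trim()
      // The subsequent `docker top "<container>"` call then fails with
      // "Error response from daemon: page not found".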

       

      The full exception is:

      java.io.IOException: Failed to run top '/bin/sh: sentence: not found
      e227748663c1923219f9b106ed23f7332e2ffb05aa81ddc79561e86ff7b2a9b0'. Error: 
      	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
      	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:186)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176)
      	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
      	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
      	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
      	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
      	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:134)
      	at org.jenkinsci.plugins.docker.workflow.Docker.node(Docker.groovy:66)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:122)
      	at stage_buildDocker.call(stage_buildDocker.groovy:43)
      	at ___cps.transform___(Native Method)
      	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      	at sun.reflect.GeneratedMethodAccessor206.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
      	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
      	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
      	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:347)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:93)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:259)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:247)
      	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
      	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
      	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Finished: FAILURE

       

      Moreover, if `CHANGE_TITLE` contains an unclosed backtick (i.e. an odd number of backticks), the `docker run` command appears to hang until the timeout period is reached, which also results in a failed build:

      ERROR: Timeout after 180 seconds
      Executing shell script inside container [docker] of pod [build-2g-8v7gq-jnms6]
      Executing command: "docker" "top" "" "-eo" "pid,comm" 
      exit
      Error response from daemon: page not found
      
      java.io.IOException: Failed to run top ''. Error: 
      	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:140)
      	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:186)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268)
      	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176)
      	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
      	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
      	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
      	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
      	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:134)
      	at org.jenkinsci.plugins.docker.workflow.Docker.node(Docker.groovy:66)
      	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:122)
      	at stage_buildDocker.call(stage_buildDocker.groovy:43)
      	at ___cps.transform___(Native Method)
      	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
      	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      	at sun.reflect.GeneratedMethodAccessor206.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
      	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
      	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
      	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
      	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
      	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
      	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:347)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:93)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:259)
      	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:247)
      	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
      	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
      	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Finished: FAILURE
      

      I'm sure that people more creative than myself can find ways to abuse this further. A simple solution might be to wrap the variable values in single quotes.
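
      For illustration, a rough sketch (with a hypothetical helper name, not the plugin's actual code) of the kind of single-quote wrapping meant here:

      // Hypothetical helper: wrap a value in single quotes for POSIX sh so that
      // backticks and $(...) are passed through literally rather than evaluated.
      // An embedded single quote is closed, backslash-escaped, and reopened: ' -> '\''
      String shellQuote(String value) {
          return "'" + value.replace("'", "'\\''") + "'"
      }

      // shellQuote('`sentence in backticks` testing')
      // => '`sentence in backticks` testing'  (sh sees a single literal word)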


          [JENKINS-57452] docker.inside allows evaluation of environment variables containing backticks

          Jon Sten added a comment -

           We got hit partially by this one, i.e. the part related to the capture of the container id. In our case the warning "Process leaked file descriptors. See https://jenkins.io/redirect/troubleshooting/process-leaked-file-descriptors for more information" was written to the output stream of the run command, which results in unexpected output:

           

          $ docker run -t -d <<removed>> -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** <<removed>> cat
          $ docker top "6e82f2a8e6fcfe1460b561479e70b406ad3506d4ba59269f30e588bf972afa9b
          Process leaked file descriptors. See https://jenkins.io/redirect/troubleshooting/process-leaked-file-descriptors for more information" -eo pid,comm

           Why we got the "Process leaked" warning in the first place is a different question, but it was nevertheless written to the output stream, and the Docker Workflow plugin was unable to handle it.

          Possible solutions:

           • Use a regex to find the hash, e.g. [0-9a-f]{64} (see the sketch below)
           • Use stdout redirection at the OS level to write the container id to a file, and then read the file back

           Neither of these solutions is perfect: the first is not future-proof if Docker decides to change the format of its hashes, and the latter is not great since output redirection can work differently on different operating systems, thus requiring separate command implementations.
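
           A rough sketch of the first option (hypothetical, not the plugin's actual code):

           // Extract the container id from the raw `docker run` output by matching a
           // 64-character hex hash, ignoring any warnings interleaved on the stream.
           String extractContainerId(String runOutput) {
               def m = runOutput =~ /\b[0-9a-f]{64}\b/
               return m.find() ? m.group() : null
           }

           // extractContainerId('Process leaked file descriptors. ...\n6e82f2a8e6fc...')
           // => '6e82f2a8e6fc...' (only the hash, regardless of extra lines on the stream)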

           Note: In my opinion, the root cause of the problem described in the issue description is insufficient escaping of the command. My addition just shows that fixing the escaping is not enough, as other things can be written to the output stream of the `docker run` command.

           


          Kevin added a comment - edited

          We just stumbled upon this issue, and I'm surprised it's only considered Minor. This can cause quite some havoc: what if a PR title contains "Do not call `<dangerous cmd>`" and that command is simply executed on a random agent?


            Assignee: Unassigned
            Reporter: Eyal Zekaria (eyalzek)
            Votes: 0
            Watchers: 4