Jenkins / JENKINS-55611

Make EXECUTOR_NUMBER available for yaml definition

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Minor
    • Component: kubernetes-plugin
    • Labels: None

      We have a use case where we would like to attach a persistent volume to the pod during the build (to store cache data). To support concurrent builds, we would like to assign the volume claim based on EXECUTOR_NUMBER, so that different concurrent builds can use different volumes.

      Currently this is impossible, since EXECUTOR_NUMBER is not available in the agent definition.

          [JENKINS-55611] Make EXECUTOR_NUMBER available for yaml definition

          Carlos Sanchez added a comment -

          What are you calling EXECUTOR_NUMBER? Have you seen this somewhere?

          Why do you say "available for yaml definition" when the yaml already supports everything Kubernetes does?

          Drum Big added a comment -

          I'm talking about the environment variable "EXECUTOR_NUMBER" that Jenkins defines (see https://wiki.jenkins.io/display/JENKINS/Building+a+software+project).

          Basically, we would like to assign the pod a PVC named "workspace-cache-${env.JOB_NAME}-${env.EXECUTOR_NUMBER}"; this name would be unique across all concurrent builds, but still reusable for builds that do not overlap in duration.

          Does that make sense?

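          In pod-template terms, the request amounts to something like the following sketch (hypothetical — as the experiment later in this thread shows, env.EXECUTOR_NUMBER resolves to null at pod-definition time, so this does not work today):

```groovy
// Hypothetical sketch of the requested behavior, using the PVC naming scheme
// from the comment above. EXECUTOR_NUMBER is currently null at this point.
// Note: JOB_NAME would also need sanitizing, since '/' is not a valid
// character in a Kubernetes resource name.
podTemplate(yaml: """
spec:
  volumes:
  - name: workspace-cache
    persistentVolumeClaim:
      claimName: workspace-cache-${env.JOB_NAME}-${env.EXECUTOR_NUMBER}
  containers:
  - name: maven
    image: maven:alpine
    volumeMounts:
    - name: workspace-cache
      mountPath: /cache
""") {
  // node(...) { ... }
}
```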

          Carlos Sanchez added a comment -

          I don't think this is possible, wdyt abayer ?
          Looks like EXECUTOR_NUMBER is only available inside node, so it is not available when defining the pod template:

          def label = "maven-${UUID.randomUUID().toString()}"
          
          podTemplate(label: label, yaml: """
          spec:
            containers:
            - name: maven
              image: maven:alpine
              command:
              - cat
              tty: true
              env:
              - name: CONTAINER_ENV_VAR
                value: x${env.JOB_NAME}-${env.EXECUTOR_NUMBER}x
          """
            ) {
          
            echo "${env.JOB_NAME}-${env.EXECUTOR_NUMBER}"
          
            node(label) {
              stage('test') {
                  container('maven') {
                      echo "${env.JOB_NAME}-${env.EXECUTOR_NUMBER}"
                      sh 'echo $CONTAINER_ENV_VAR'
                  }
              }
            }
          }
          
          Started by user admin
          Running in Durability level: MAX_SURVIVABILITY
          [Pipeline] podTemplate
          [Pipeline] {
          [Pipeline] echo
          test/executor-number-null
          [Pipeline] node
          Still waiting to schedule task
          ‘Jenkins’ doesn’t have label ‘maven-aeef475c-5a8d-42e0-bbf2-a5b20b3a02e3’
          Agent maven-aeef475c-5a8d-42e0-bbf2-a5b20b3a02e3-22x43-974rz is provisioned from template Kubernetes Pod Template
          Agent specification [Kubernetes Pod Template] (maven-aeef475c-5a8d-42e0-bbf2-a5b20b3a02e3): 
          yaml:
          
          spec:
            containers:
            - name: maven
              image: maven:alpine
              command:
              - cat
              tty: true
              env:
              - name: CONTAINER_ENV_VAR
                value: xtest/executor-number-nullx
          
          
          Running on maven-aeef475c-5a8d-42e0-bbf2-a5b20b3a02e3-22x43-974rz in /home/jenkins/workspace/test/executor-number
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (test)
          [Pipeline] container
          [Pipeline] {
          [Pipeline] echo
          test/executor-number-0
          [Pipeline] sh
          + echo xtest/executor-number-nullx
          xtest/executor-number-nullx
          [Pipeline] }
          [Pipeline] // container
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] }
          [Pipeline] // podTemplate
          [Pipeline] End of Pipeline
          Finished: SUCCESS
          


          Andrew Bayer added a comment -

          csanchez - correct, we don't have the executor number until we're on the executor, so...


          Drum Big added a comment -

          hmm. Is it possible for the kubernetes plugin to allocate and provide some variable like this by itself? (Some kind of unique id that does not collide with other concurrent run, but is reused once the job is finished).

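          One user-level approximation of such an id (a sketch, not a plugin feature — it assumes the Lockable Resources plugin, with resources named e.g. workspace-cache-0 … workspace-cache-4 pre-defined under the label 'workspace-cache', and matching PVCs created in the cluster) is to acquire a lockable resource before defining the pod and use its name for the claim:

```groovy
// Sketch of a workaround: the lock step runs before the pod template is
// evaluated, so the acquired resource name IS available here, unlike
// EXECUTOR_NUMBER. It is unique among concurrent builds and reused once
// this build releases it.
def label = "maven-${UUID.randomUUID().toString()}"

lock(label: 'workspace-cache', quantity: 1, variable: 'CACHE_ID') {
  podTemplate(label: label, yaml: """
spec:
  volumes:
  - name: cache
    persistentVolumeClaim:
      claimName: ${env.CACHE_ID}
  containers:
  - name: maven
    image: maven:alpine
    volumeMounts:
    - name: cache
      mountPath: /cache
""") {
    node(label) {
      // build steps using /cache
    }
  }
}
```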

          Rainer Segebrecht added a comment -

          Hi Drum Big,

          we have exactly the same challenge.

          We're using the Jenkins k8s plugin and have some big git repos. For performance reasons we cannot clone the repos on every build (a clone takes about 4 minutes).

          With a traditional Jenkins infrastructure there is a node, and Jenkins sets up a unique directory for each executor. With Kubernetes there is nothing like a persistent executor for a node, since the "nodes" in Kubernetes are only temporarily spawned for one build.

          As a result, there is no unique directory created by Jenkins in the Kubernetes world, so we need a way to map a persistent volume the way Jenkins maps those unique directories. Otherwise we always have to clone the git repo (okay, some performance boosts are possible, e.g. git clone -b <branch> <url> --depth=1, but that's not a great solution compared to an update of an existing repo).

          Maybe there is a possibility to get that unique Jenkins path and use it as an ID for a unique persistent volume.
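          The clone-vs-update pattern described above can be sketched as a small shell helper (hypothetical function and paths — this assumes a persistent volume is already mounted at the cache directory):

```shell
#!/bin/sh
# update_cache REPO_URL CACHE_DIR
# Clone into the mounted cache volume on the first build that uses it;
# on every later build only fetch, which is much cheaper than a clone.
update_cache() {
  url=$1
  cache=$2
  if [ -d "$cache/.git" ]; then
    git -C "$cache" fetch --all --prune   # incremental update: fast
  else
    git clone "$url" "$cache"             # first build on this volume: full clone
  fi
}
```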

          Jesse Glick added a comment -

          Asking to bind EXECUTOR_NUMBER is proposing a solution (or really a workaround). To take a step back, the problem is the lack of a supported cache system. A persistent volume claim would presumably be required in this context, but it might be something that Jenkins manages for you; and depending on the volume type and build technology used, it may or may not offer concurrent access, cross-(K8s) node access, etc.

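          For the Git-cache case specifically, one building block that already exists in the kubernetes plugin is mounting a PVC as the agent workspace via workspaceVolume (parameter names as documented for the plugin; whether concurrent builds can share the claim depends on its access mode, which is exactly the caveat raised above):

```groovy
// Sketch: mount an existing PVC as the agent workspace. With a
// ReadWriteMany claim, concurrent builds on different K8s nodes can
// share it; with ReadWriteOnce they cannot.
podTemplate(
    workspaceVolume: persistentVolumeClaimWorkspaceVolume(
        claimName: 'jenkins-workspace-cache', readOnly: false)) {
  // node(...) { ... }
}
```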

            Assignee: Unassigned
            Reporter: Drum Big (bigdrum)
            Votes: 0
            Watchers: 5

              Created:
              Updated: