JENKINS-64047: lock beforeAgent of main pipeline agent

      Hello,
       
      My Jenkins is running in a Kubernetes cluster and all pipelines run in pods.
      I'd like to limit the number of pipelines running in parallel: an executor limitation like the one we had when all pipelines ran on the master.
       
      I'm using the Lockable Resources plugin.
      My expected implementation was something like:
       

      pipeline {
          agent {
              kubernetes {
                  label 'test-lock'
                  yaml libraryResource('my-pod.yaml')
              }
          }
          options {
              lock(label: 'forge-executor', quantity: 1, variable: 'forgeExecutor')
          }
      
          stages {
              stage('echo') {
                  steps {
                      echo "OK"
                      script {
                          def outcome = input message: 'Please select', parameters: [
                              [name: 'myChoice', description: 'My choice', choices: 'Choice 1\nChoice 2\nChoice 3', $class: 'ChoiceParameterDefinition']
                          ]
                      }
                  }
              }
          }
      }
      

       
      OK, the pipeline doesn't execute any stages if it cannot acquire the lock, but the Kubernetes pod is still started, so cluster resources are used anyway.
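       
      For comparison, the ordering I want is what a scripted pipeline gives when the lock step wraps the node allocation: no pod is requested until the lock is held. A minimal sketch (not my real pipeline, assuming the kubernetes plugin's podTemplate step and the same resource label):
       
      // Scripted pipeline: acquire the lock first, then ask for a pod.
      lock(label: 'forge-executor', quantity: 1, variable: 'forgeExecutor') {
          // The pod is only created once the lock has been acquired.
          podTemplate(yaml: libraryResource('my-pod.yaml')) {
              node(POD_LABEL) {
                  stage('echo') {
                      echo "OK"
                  }
              }
          }
      }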
       
      I tried to add a when block with the beforeAgent option, but when is only allowed at stage level.
      So I found a workaround: don't set any main agent and put one top-level stage with a when condition:

      pipeline {
          agent none
          stages {
              stage('lock') {
                  agent {
                      kubernetes {
                          label 'test-lock'
                          yaml libraryResource('my-pod.yaml')
                      }
                  }
                  when {
                      beforeAgent true
                    equals(expected: true, actual: true)
                  }
                  options {
                      lock(label: 'forge-executor', quantity: 1, variable: 'forgeExecutor')
                  }
                  stages {
                      stage('echo') {
                          steps {
                              echo "OK"
                              script {
                                  def outcome = input message: 'Please select', parameters: [
                                      [name: 'myChoice', description: 'My choice', choices: 'Choice 1\nChoice 2\nChoice 3', $class: 'ChoiceParameterDefinition']
                                  ]
                              }
                          }
                      }
                  }
              }
          }
      }
      

      Or, more simply:

       

      pipeline {
          agent none
          options {
              lock(label: 'forge-executor', quantity: 1, variable: 'forgeExecutor')
          }
          stages {
              stage('lock') {
                  agent {
                      kubernetes {
                          label 'test-lock'
                          yaml libraryResource('my-pod.yaml')
                      }
                  }
                  stages {
                      stage('echo') {
                          steps {
                              echo "OK"
                              script {
                                  def outcome = input message: 'Please select', parameters: [
                                      [name: 'myChoice', description: 'My choice', choices: 'Choice 1\nChoice 2\nChoice 3', $class: 'ChoiceParameterDefinition']
                                  ]
                              }
                          }
                      }
                  }
              }
          }
      }

      IMO, the pipeline should not take any agent resources if it cannot lock the expected resource; it should stay in the queue instead.
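       
      What I'm asking for could look like a beforeAgent-style flag on the lock option, analogous to when { beforeAgent true }. This flag does not exist today; the snippet below is only a sketch of the requested behaviour:
       
      pipeline {
          agent {
              kubernetes {
                  label 'test-lock'
                  yaml libraryResource('my-pod.yaml')
              }
          }
          options {
              // Hypothetical flag: acquire the lock before provisioning the agent.
              lock(label: 'forge-executor', quantity: 1, variable: 'forgeExecutor', beforeAgent: true)
          }
          stages {
              stage('echo') {
                  steps {
                      echo "OK"
                  }
              }
          }
      }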
       
      Regards,
       
      Arnaud

       

            Assignee: Tobias Gruetzmacher
            Reporter: Arnaud Bourree
            Votes: 0
            Watchers: 2
