
[JENKINS-66337] Kubernetes Plugin IllegalStateException: Not expecting pod template to be null at this point, on master restart for a long-lasting slave

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Component: kubernetes-plugin
    • Labels: None
    • Environment: Jenkins 2.305
      kubernetes-plugin: 1.30.1
      slave-agent: 4.10

      We are in the process of upgrading our Jenkins instance and plugin versions.

       

      We have encountered this bug with long-lasting pods (e.g. idleMinutes: "30").

      If there is no slave matching the agent label, everything runs properly:

      • A new agent pod is provisioned.
      • The closure is properly run on the new agent pod.

      Please note that the agent will properly run every pipeline submitted from this point on (see the second-build sketch after the pipeline example below):

      • Jenkins properly finds the already running agent (yeah!)
      • The closure is properly executed.

       

      If we restart Jenkins (the master) while the agent was already provisioned (matching label):

      • Jenkins properly finds the already running agent (yeah!)
      • The PodTemplate lookup fails with the exception:
        java.lang.IllegalStateException: Not expecting pod template to be null at this point

       

      Here is a pipeline example:

       

      def slavePodForYou(steps, body){
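          // Wraps podTemplate so a Jenkinsfile closure can be run on a long-lived
          // "slavenode" agent pod (see the idleMinutes setting below).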
          String image = 'jenkins/jnlp-slave:latest'
          
          def kubeLabel = "slavenode"
          steps.podTemplate(name: "slavenode", label: kubeLabel, namespace: "default",
                     containers: [
                         steps.containerTemplate(
                          name: "jnlp",
                          image: image,
                          ttyEnabled: false,
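                         // ${computer.jnlpmac} and ${computer.name} are placeholders that the
                         // Kubernetes plugin substitutes when it launches the JNLP container;
                         // the single-quoted Groovy string keeps them literal.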
                          args: '${computer.jnlpmac} ${computer.name}',
                          resourceRequestCpu: '100m',
                          resourceRequestMemory: '250Mi'
                          )
                     ],
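                     // The raw YAML fragment below is merged into the generated pod spec;
                     // here it only adds a "job: jenkins" label to the pod metadata.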
                     yaml: """
                             metadata:
                               labels:
                                 job: jenkins
                           """.stripIndent(),
                     showRawYaml: false,
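                     // Keep the pod (and its agent) alive for 30 minutes after the last
                     // build so that subsequent builds can reuse it.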
                     idleMinutes: "30",
                 ){
                     body.call(kubeLabel)
                 }   
      }
      slavePodForYou(this){ kubeLabel ->
          node(kubeLabel){
              sh "echo 'chicken'"
          }
      }
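
      For completeness, the "already running agent" behavior described above is simply a second build started while the pod from a previous build is still inside its 30-minute idle window. A minimal sketch reusing the slavePodForYou helper defined above:

      // Hypothetical second build, started while the "slavenode" pod from the
      // first build is still idle.
      slavePodForYou(this){ kubeLabel ->
          node(kubeLabel){
              // Without a master restart in between, this build is scheduled on the
              // already-running pod and no new pod is created. If the master has been
              // restarted in the meantime, the build instead fails with:
              //   java.lang.IllegalStateException: Not expecting pod template to be null at this point
              sh "echo 'second build reuses the idle agent pod'"
          }
      }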
      

       

       

      Note: everything was working properly with version 1.25.1 of the Kubernetes plugin.

       

      Note also: the JNLP agent properly rejoins the master after the restart.

      Note also: we are not using JCasC. All our pod definitions are done in a shared library; only the cloud (cluster) definition is done in the UI.

       


      Assignee: Unassigned
      Reporter: Pascal Laporte
      Votes: 0
      Watchers: 2