Jenkins / JENKINS-37087

Label expressions and multiple labels per pod aren't handled properly

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Component: kubernetes-plugin
    • Operating System: RHEL 6
      Java version: 1.8.0_45
      Jenkins version: reproducible on both 1.651.2 and 2.7.1

      Jenkins allows jobs to have label expressions of the sort:
      (label1 || label2) && !(label3)

      If the label expression is satisfied by any of the pod templates inside any of the Kubernetes clouds, the function provision() in KubernetesCloud.java treats it as if it had received a single label rather than a label expression. When addProvisionedSlave() then tries to count the running containers with that "label", the Kubernetes API rejects the request with the following backtrace and the job gets stuck in the queue:

      WARNING: Failed to count the # of live instances on Kubernetes
      io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.mydomain.com/api/v1/namespaces/infra-build/pods?labelSelector=name%3Djenki$s-(label1||label2)%26%26!(label3). Message: unable to parse requirement: invalid label value: must have at most 63 characters, matching regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$ e.g. "MyValue" or "". Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=unable to parse requirement: invalid label value: must have at most 63 characters$ matching regex (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?: e.g. "MyValue" or "", metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=null, status=Fail$re, additionalProperties={}).
              at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:310)
              at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:263)
              at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:232)
              at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:416)
              at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:58)
              at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.addProvisionedSlave(KubernetesCloud.java:477)
              at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.provision(KubernetesCloud.java:357)
              at hudson.slaves.NodeProvisioner$StandardStrategyImpl.apply(NodeProvisioner.java:700)
              at hudson.slaves.NodeProvisioner.update(NodeProvisioner.java:305)
              at hudson.slaves.NodeProvisioner.access$000(NodeProvisioner.java:58)
              at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(NodeProvisioner.java:797)
              at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:50)
              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
              at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
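The rejection can be reproduced outside the cluster: the label-value rule quoted in the error message (at most 63 characters, matching the given regex) simply does not admit a label expression. A minimal sketch checking values against that rule (the class and method names here are illustrative, not part of the plugin):

```java
import java.util.regex.Pattern;

public class LabelValueCheck {
    // Label-value rule quoted verbatim in the Kubernetes error message above
    private static final Pattern K8S_LABEL_VALUE =
            Pattern.compile("(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?");

    public static boolean isValidLabelValue(String value) {
        return value.length() <= 63 && K8S_LABEL_VALUE.matcher(value).matches();
    }

    public static void main(String[] args) {
        // The raw label expression is rejected ('(', '|', '&', '!' are not allowed)...
        System.out.println(isValidLabelValue("(label1||label2)&&!(label3)")); // false
        // ...while a plain dashed label is accepted
        System.out.println(isValidLabelValue("jenkins-label1")); // true
    }
}
```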
      

      A quick fix might be to launch containers with the combined labels defined in the pod template. For example, if the pod template has the labels label1 and label2, we could spawn the container with the label name=jenkins-label1-label2, or something similar that satisfies the regex required by the Kubernetes API.
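The combined-label idea could be sketched roughly as below; buildPodLabel and the jenkins- prefix are hypothetical, not the plugin's actual code:

```java
import java.util.Arrays;
import java.util.List;

public class CombinedPodLabel {
    /**
     * Joins a pod template's labels into one value such as
     * "jenkins-label1-label2" that satisfies the Kubernetes
     * label-value regex, trimmed to the 63-character limit.
     */
    public static String buildPodLabel(List<String> templateLabels) {
        StringBuilder sb = new StringBuilder("jenkins");
        for (String label : templateLabels) {
            // Replace any character Kubernetes would reject with '-'
            sb.append('-').append(label.replaceAll("[^A-Za-z0-9_.-]", "-"));
        }
        String value = sb.length() > 63 ? sb.substring(0, 63) : sb.toString();
        // Ensure the value still ends with an allowed trailing character
        return value.replaceAll("[^A-Za-z0-9]+$", "");
    }

    public static void main(String[] args) {
        System.out.println(buildPodLabel(Arrays.asList("label1", "label2")));
        // jenkins-label1-label2
    }
}
```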

      In the current code, if one pod template has more than one label, the container cap check for that template inside addProvisionedSlave() is wrong, since it counts only containers carrying the given label and not all possible labels of that pod template.

      Also, if more than one pod template satisfies the given label expression, all satisfying pod templates should be tried instead of only the first, since one of them might have reached its container cap while another might not have.
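Both points could be sketched as follows; PodTemplateStub and the counting logic are hypothetical stand-ins for the plugin's PodTemplate and addProvisionedSlave(), not its real code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TemplateCapCheck {
    /** Hypothetical stand-in for the plugin's PodTemplate. */
    static class PodTemplateStub {
        final Set<String> labels;
        final int containerCap;
        PodTemplateStub(int containerCap, String... labels) {
            this.containerCap = containerCap;
            this.labels = new HashSet<>(Arrays.asList(labels));
        }
    }

    /**
     * Counts running pods carrying ANY of the template's labels,
     * not just the single label the job asked for.
     */
    static int countRunning(PodTemplateStub t, List<Set<String>> runningPodLabels) {
        int count = 0;
        for (Set<String> podLabels : runningPodLabels) {
            if (!Collections.disjoint(podLabels, t.labels)) {
                count++;
            }
        }
        return count;
    }

    /**
     * Tries every template that satisfies the request instead of only
     * the first, so a template at its cap no longer blocks provisioning.
     */
    static PodTemplateStub pickTemplate(List<PodTemplateStub> matching,
                                        List<Set<String>> runningPodLabels) {
        for (PodTemplateStub t : matching) {
            if (countRunning(t, runningPodLabels) < t.containerCap) {
                return t;
            }
        }
        return null; // every matching template has reached its cap
    }
}
```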


          Carlos Sanchez added a comment -

          Seems that pod templates need to support multiple labels. Jenkins should add all pod labels as Kubernetes labels, but given there is no support for multiple labels with the same key, I'd add the Jenkins labels as jenkins/LABEL=true, i.e. jenkins/java=true and jenkins/golang=true. Checking how many pods are already running can be done similarly to the docker plugin: https://github.com/jenkinsci/docker-plugin/blob/master/docker-plugin/src/main/java/com/nirima/jenkins/plugins/docker/DockerCloud.java#L570
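The jenkins/LABEL=true scheme described in this comment could be sketched as a plain map transformation; toK8sLabels is an illustrative name, not the plugin's API:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class JenkinsLabelMapper {
    /**
     * Turns a pod template's Jenkins labels into Kubernetes labels of
     * the form jenkins/LABEL=true, since one pod cannot carry several
     * label values under the same key.
     */
    public static Map<String, String> toK8sLabels(Set<String> jenkinsLabels) {
        Map<String, String> k8sLabels = new LinkedHashMap<>();
        for (String label : jenkinsLabels) {
            k8sLabels.put("jenkins/" + label, "true");
        }
        return k8sLabels;
    }

    public static void main(String[] args) {
        Set<String> labels = new TreeSet<>(Arrays.asList("java", "golang"));
        System.out.println(toK8sLabels(labels)); // {jenkins/golang=true, jenkins/java=true}
    }
}
```

Counting pods per template then reduces to a label-selector query on any one jenkins/LABEL key, which stays within the Kubernetes key/value syntax.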

          Nehal J Wani added a comment (edited)

          Let's consider this hypothetical example:

          Pod Template 1 supports:

          customtag1 customtag2 customtag3

          Pod Template 2 supports:

          customtag3 customtag4 customtag5

          And I have a job, with the label expression:

          (customtag1 && customtag3) || customtag6

          In such a scenario, what label should be passed to the kubernetes cluster while creating the pod?


          Carlos Sanchez added a comment -

          First, you need to pick a template; I believe the docker plugin just picks the first one that matches, which in this case would be Pod Template 1. Then the pod is started based on that template, so the pod labels will be customtag1 customtag2 customtag3.

          Carlos Sanchez added a comment - PR at https://github.com/jenkinsci/kubernetes-plugin/pull/69

          Bruno Bieth added a comment -

          Is this still open? I took a glance at the PR and couldn't find any test, am I missing something?


          Nehal J Wani added a comment -

          Seems to be fixed by https://github.com/jenkinsci/kubernetes-plugin/pull/69

            Assignee: nehaljwani Nehal J Wani
            Reporter: nehaljwani Nehal J Wani
            Votes: 0
            Watchers: 3
