
Kubernetes plugin does not respect Container Cap

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Component: kubernetes-plugin
    • None
    • Environment: Jenkins 2.7.1
      Kubernetes plugin 0.8

      The Kubernetes plugin frequently creates more concurrent slaves than the number set in the "Container Cap" field.
      It doesn't respect the "Max number of instances" setting in the Pod Template either.

          [JENKINS-38260] Kubernetes plugin does not respect Container Cap

          Jonathan Rogers created issue -
          Albert V made changes -
          Description Original: The Kubernetes plugin frequently creates more concurrent slaves than the number set in the "Container Cap" field.
          New: The Kubernetes plugin frequently creates more concurrent slaves than the number set in the "Container Cap" field. It doesn't respect the "Max number of instances" setting in the Pod Template either.
          Albert V made changes -
          Priority Original: Minor [ 4 ] New: Major [ 3 ]

          Jonathan Rogers added a comment -

          BTW, I have switched to the Kubernetes CI Plugin. That plugin requires a bit more knowledge about Kubernetes but is more flexible and behaves more correctly in my experience. Specifically, it better respects the configured concurrent container count.

          Carlos Sanchez added a comment -

          Without more details (logs) I can't really figure out what you mean. The instance cap is honored in multiple examples.
          Tomasz Bienkowski made changes -
          Attachment New: jenkins-master.log [ 36579 ]

          Tomasz Bienkowski added a comment -

          Hello, I have the same problem, although in my case it's:

          • Jenkins 2.32.1,
          • plugin 0.10.

          I have attached the log from the Jenkins master instance (jenkins-master.log).

          Thanks in advance for any help.

          Tomasz Bienkowski added a comment -

          I don't know how fabric8's Kubernetes client works, but looking at the source code of the "addProvisionedSlave" method here:

          https://github.com/jenkinsci/kubernetes-plugin/blob/kubernetes-0.10/src/main/java/org/csanchez/jenkins/plugins/kubernetes/KubernetesCloud.java

          I was wondering if this could be a race condition. Perhaps fabric8's client does not return a pod until it is actually ready (started). If so, this might explain why the Container Cap is not respected. The sequence of events could be like this:

          1. Jenkins wants to create a slave pod. The container cap is not exceeded.
          2. The pod is being deployed to Kubernetes (it is starting).
          3. Jenkins wants to create another pod, so it asks fabric8's client whether any slave pods are running.
          4. fabric8's client responds that there are no pods (because the pod from step 2 is still being deployed and is not running yet).
          5. Jenkins creates another pod, effectively exceeding the Container Cap setting.

          Is this reasonable?
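          The check-then-act race hypothesized above can be sketched as a small, deterministic simulation. This is purely illustrative and is not the plugin's actual code; the class, method names, and pod states are made up for the sketch. It contrasts a capacity check that counts only RUNNING pods (the suspected behavior) with one that also counts pods that are still starting:

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerCapRaceSketch {
    enum Phase { PENDING, RUNNING }

    static final int CONTAINER_CAP = 1;
    static final List<Phase> pods = new ArrayList<>();

    // Suspected behavior: only pods already RUNNING count toward the cap,
    // so a pod that is still starting is invisible to the check.
    static boolean naiveHasCapacity() {
        long running = pods.stream().filter(p -> p == Phase.RUNNING).count();
        return running < CONTAINER_CAP;
    }

    // Safer check: pods that are still starting also count toward the cap.
    static boolean safeHasCapacity() {
        return pods.size() < CONTAINER_CAP;
    }

    public static void main(String[] args) {
        // Steps 1-2: first slave pod is provisioned; it is still PENDING.
        if (naiveHasCapacity()) pods.add(Phase.PENDING);
        // Steps 3-5: second provisioning round sees zero RUNNING pods,
        // so the cap is exceeded.
        if (naiveHasCapacity()) pods.add(Phase.PENDING);
        System.out.println("naive check provisioned " + pods.size()
                + " pods (cap=" + CONTAINER_CAP + ")");

        pods.clear();
        if (safeHasCapacity()) pods.add(Phase.PENDING);
        if (safeHasCapacity()) pods.add(Phase.PENDING); // rejected this time
        System.out.println("safe check provisioned " + pods.size()
                + " pods (cap=" + CONTAINER_CAP + ")");
    }
}
```

          Under this assumption, counting pending (not-yet-running) pods against the cap at provisioning time would close the window, though the real fix would also need to handle concurrent provisioning threads.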

          Carlos Sanchez added a comment -

          OK, so there may be some times when the container cap is not honored, when multiple pods are started at the same time.

          Tomasz Bienkowski added a comment -

          Unfortunately this can easily lead to exhaustion of the available hardware resources on the Kubernetes cluster. I have experienced a Kubernetes cluster being destabilized because of this, as the pods allocated to a physical node start to consume more memory than is physically available.

            Assignee: csanchez Carlos Sanchez
            Reporter: jrogers Jonathan Rogers
            Votes: 9
            Watchers: 11
