Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment: Jenkins ver. 2.176.3
Kubernetes-plugin: 1.19.3
kubernetes 1.27.1
I create a lot of jobs, around 300 of them, using Job DSL. I have set the Kubernetes-plugin Concurrency Limit to 20.
The Kubernetes plugin then spins up a new node/pod for almost every job. All but a few get stuck in the pending state due to resource limits in my Kubernetes cluster; the pending pods are removed after a while and then recreated.
Sometimes the Concurrency Limit is respected, and I see a lot of the following in my Jenkins log, but it should never get to 184 running or pending.
INFO: Maximum number of concurrently running agent pods (20) reached for Kubernetes Cloud kubernetes, not provisioning: 184 running or pending in namespace jenkins with Kubernetes labels {jenkins=slave}
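For context, the cap check essentially boils down to counting agent pods through the Kubernetes API at provisioning time. The following is a minimal sketch of that idea using the fabric8 client, not the plugin's actual code; the class, method names and the label selector are illustrative:
{code:java}
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.util.List;

public class CapCheckSketch {

    /**
     * Illustrative only: count the agent pods the API server currently reports
     * and compare against the cap. Pods created a moment ago but not yet visible
     * through the API are missed, so many provisioning requests arriving in a
     * short window can each pass this check.
     */
    static boolean belowCap(KubernetesClient client, String namespace, int cap) {
        List<Pod> agents = client.pods()
                .inNamespace(namespace)
                .withLabel("jenkins", "slave")
                .list()
                .getItems();
        long runningOrPending = agents.stream()
                .map(p -> p.getStatus() == null ? "" : p.getStatus().getPhase())
                .filter(phase -> "Running".equals(phase) || "Pending".equals(phase))
                .count();
        return runningOrPending < cap;
    }

    public static void main(String[] args) {
        // Uses the fabric8 kubernetes-client (6.x builder style) and the current kubeconfig context.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            System.out.println("Below cap: " + belowCap(client, "jenkins", 20));
        }
    }
}
{code}
Because such a count only reflects pods the API server already knows about, a burst of provisioning requests can each see a count below 20 and create another pod, which would be consistent with the 184 running or pending pods in the log above.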
causes: JENKINS-63705 "Concurrency limit not calculated over template anymore in 1.27.1" (Resolved)
relates to: JENKINS-38260 "Kubernetes plugin does not respect Container Cap" (Resolved)
[JENKINS-59959] The Concurrency Limit is not always respected.
From what I can tell, it has to do with how getActiveSlavePods relies on the Kubernetes API to tell it what is currently running. There will always be an inherent delay between launching a pod and it showing up in the API, so this approach will likely never be stable. IMO, a singleton that manages the instantiation of the slaves and their states would be the more appropriate solution. I plan to take a look at this next week, as it's killing our Jenkins DR automation when our multibranch projects queue up 2k jobs on the first run after scanning.
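A rough, hypothetical sketch of the kind of in-memory singleton accounting suggested here; class and method names are made up and are not the plugin's API:
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch: a slot is reserved in memory before a pod is created and
 * released when the pod is gone, so the concurrency limit does not depend on the
 * Kubernetes API having caught up with recently launched pods.
 */
public final class AgentSlotTracker {

    private static final AgentSlotTracker INSTANCE = new AgentSlotTracker();

    private final AtomicInteger inFlight = new AtomicInteger();

    private AgentSlotTracker() {
    }

    public static AgentSlotTracker get() {
        return INSTANCE;
    }

    /** Reserve a slot; returns false when the cap is already reached, so no pod is created. */
    public boolean tryAcquire(int cap) {
        while (true) {
            int current = inFlight.get();
            if (current >= cap) {
                return false;
            }
            if (inFlight.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    /** Release the slot once the corresponding pod terminated or failed to launch. */
    public void release() {
        inFlight.decrementAndGet();
    }
}
{code}
The point of this design is that the slot is reserved synchronously, before any pod is created, so a burst of provisioning requests cannot exceed the limit regardless of how long new pods take to appear in the API.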