Type: Bug
Resolution: Fixed
Priority: Critical
Labels: None
Jenkins: 2.346.3
OS: Linux - 5.4.0-58-generic
Since updating from 2.346.2 to 2.346.3, running multiple builds in parallel (either concurrent builds of the same pipeline or builds of different pipelines) causes memory spikes that far exceed the JVM heap size, which leads to Kubernetes killing the Jenkins pod (OOMKilled).
Setup:
- Kubernetes container with memory request and limit = 8 GB
- Jenkins JVM with -Xmx4g
Steps to reproduce:
- Upgrade to 2.346.3
- Run multiple builds in parallel (each should take a few minutes)
- In the Jenkins pod, observe that the memory of the Jenkins process spikes within a couple of seconds (see the sketch after these steps for distinguishing heap growth from native growth)
- As soon as the memory exceeds the container's memory limit, Kubernetes kills the pod. We tested the same scenario with a 32 GB memory limit and -Xmx4g; the issue then simply takes longer to occur.
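Not part of the original report, but a minimal sketch of how one might check, while the parallel builds run, whether the spike is inside the Java heap (bounded by -Xmx) or in native/off-heap memory. It uses the standard java.lang.management API; the class name and polling interval are illustrative, and the same calls could be adapted to run on the controller instead of as a standalone class.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/**
 * Periodically prints JVM heap and non-heap usage so the memory growth
 * seen by Kubernetes can be attributed either to the Java heap or to
 * native memory (metaspace, thread stacks, direct buffers, ...).
 */
public class MemoryWatch {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {  // sample until the process is stopped
            MemoryUsage heap = memory.getHeapMemoryUsage();
            MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
            System.out.printf("heap used=%d MB committed=%d MB | non-heap used=%d MB%n",
                    heap.getUsed() / (1024 * 1024),
                    heap.getCommitted() / (1024 * 1024),
                    nonHeap.getUsed() / (1024 * 1024));
            Thread.sleep(5_000);  // sample every five seconds
        }
    }
}
```

If the heap stays near the 4 GB -Xmx while the pod's resident memory keeps climbing toward the 8 GB limit, the growth is more likely in native allocations than in the heap itself.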
Downgrading to 2.346.2 solved the issue.
Could it be a plugin?
The exact same set of plugins works with 2.346.2.
This bug report is light on steps to reproduce and details. Which JVM is running out of memory, the controller JVM or an agent JVM? If it is the controller JVM, does the same issue persist with the 2.346.2 Docker image but the 2.346.3 jenkins.war file? That would tell us whether the regression is in the Java code or in the environment delivered in the Docker image (OS version, Java version, etc.). And finally, have you done any analysis of a heap dump to report what is using up the heap?
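As a starting point for the heap-dump question, here is a hedged sketch of triggering a dump from inside the controller JVM via the standard HotSpotDiagnosticMXBean. The output path /var/jenkins_home/controller.hprof is only an example, and the same calls can be issued from the Jenkins script console or via jmap rather than a standalone class; the resulting .hprof file can then be inspected offline (for example with Eclipse MAT) to see what is retaining the heap.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;

/** Writes a heap dump of live objects for offline analysis. */
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        // Example output path; the 'true' argument restricts the dump to live (reachable) objects.
        diagnostic.dumpHeap("/var/jenkins_home/controller.hprof", true);
    }
}
```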