Type: Bug
Resolution: Fixed
Priority: Major
Labels: None
Environment:
- Jenkins 2.289.1
- k8s v1.18.3 deployment with jenkins helm-chart 3.3.22
After about a week of uptime, our Jenkins instance stops working with java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached.
After installing the monitoring plugin and inspecting the threads regularly, we see instances of RxNewThreadScheduler-X accumulating in TIMED_WAITING. These appear to be the cause of the problem.
Unfortunately, I cannot pinpoint which component or plugin is causing those threads to spawn. The instance exclusively runs pipelines that spawn kubernetes nodes via the kubernetes plugin.
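To confirm whether the leak is growing without relying solely on the monitoring plugin, a small JMX-based check can count live threads by name prefix. This is a minimal sketch, not part of the original report; the class and method names are hypothetical, and it could be run periodically (e.g. from the Jenkins script console) to watch the RxNewThreadScheduler count climb:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class RxThreadCount {
    // Count live threads whose name starts with the given prefix,
    // e.g. "RxNewThreadScheduler". A steadily growing count across
    // samples indicates a thread leak.
    public static long countThreadsWithPrefix(String prefix) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long count = 0;
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            // getThreadInfo may return null entries for threads that
            // terminated between the two calls; skip those.
            if (info != null && info.getThreadName().startsWith(prefix)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("RxNewThreadScheduler threads: "
                + countThreadsWithPrefix("RxNewThreadScheduler"));
    }
}
```

Sampling this value once an hour and comparing it against build activity may help attribute the leaking threads to a specific plugin or pipeline step.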
"RxNewThreadScheduler-19" Id=2990 Group=main TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@684980b7
    at java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
    - waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@684980b7
    at java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
    at java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
    at java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
    at java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
    at java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
    at java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
    at java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base@11.0.11/java.lang.Thread.run(Thread.java:829)
A list of currently installed plugins is attached.
Links:
- is caused by JENKINS-65622: Thread leak after upgrade to 3.0.0 (Closed)