Type: Bug
Resolution: Unresolved
Priority: Critical
Labels: None
Environment:
Jenkins v2.7.2
Plugin v1.10
Windows 10 (also seen on Debian 7.11)
Windows 10 JNLP Agents
java version "1.7.0_101"
OpenJDK Runtime Environment (IcedTea 2.6.6) (7u101-2.6.6-2~deb7u1)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
and
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
We have been seeing our Jenkins instance's memory usage slowly increase and never decrease, to the point where it grinds to a halt and can't process any requests. The only way to recover is to restart the server. We have noticed that agents going offline and their setup script then failing makes the problem much worse, much faster.
I have reproduced the issue on a clean install of Jenkins by installing the plugin, creating 20 new agents, and setting each agent's setup script to the following:
#!cmd.exe /c "exit /b -1"
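
For anyone who wants to script the reproduction, the 20 throwaway agents can be created from the Jenkins script console (Manage Jenkins -> Script Console). This is a minimal sketch using only core API; the node names and remote FS path are illustrative, not from our setup:

import hudson.model.Node
import hudson.slaves.DumbSlave
import hudson.slaves.JNLPLauncher
import hudson.slaves.RetentionStrategy
import jenkins.model.Jenkins

def jenkins = Jenkins.getInstance()
(1..20).each { i ->
    def agent = new DumbSlave(
        "leak-test-${i}",            // node name (illustrative)
        "setup-failure repro agent", // node description
        "C:\\jenkins",               // remote FS root (illustrative)
        "1",                         // number of executors
        Node.Mode.NORMAL,
        "",                          // label string
        new JNLPLauncher(),          // JNLP (Java Web Start) launcher
        new RetentionStrategy.Always(),
        []                           // no node properties
    )
    jenkins.addNode(agent)
}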
I took a heap dump just after start-up and another after half an hour of the server doing nothing but having the 20 agents repeatedly connect and fail setup. Comparing the two showed almost half a million StackTraceElement objects, most of which were being held by Channel objects that couldn't be GCed (of which there were over 3000). If left long enough, the server will stop processing connections and stop responding to anything.
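
In case it helps anyone else gather the same evidence, the dumps can be captured with the stock JDK tooling; the PID and file names below are placeholders:

jmap -dump:live,format=b,file=before.hprof <jenkins-pid>
(leave the 20 agents cycling for ~30 minutes)
jmap -dump:live,format=b,file=after.hprof <jenkins-pid>

The live option forces a full GC before the dump, so only strongly reachable objects are recorded, which is exactly what you want when arguing that the Channel objects can't be collected. The two dumps can then be diffed in a heap analyzer such as Eclipse MAT, or browsed with jhat.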