Type: Bug
Resolution: Unresolved
Priority: Minor
Labels: None
svanoort has most of the details out of band.
Using the latest Jenkins LTS 2.121.1 docker image.
Start with the managed memory constrained (e.g. `-Xmx256m -XX:MaxMetaspaceSize=128m` or similar).
Run Freestyle builds whose shell step is `sleep 5`: no leak.
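For completeness, a sketch of how I start the container; the image tag matches the report, but the container name and port mapping are illustrative (the official image honors `JAVA_OPTS`):

```shell
# Run the Jenkins LTS 2.121.1 image with constrained managed memory and a
# 2 GB container limit, so unbounded RSS growth eventually gets the
# container OOM-killed by docker.
docker run -d --name jenkins-leak-repro \
  -m 2g \
  -e JAVA_OPTS='-Xmx256m -XX:MaxMetaspaceSize=128m' \
  -p 8080:8080 \
  jenkins/jenkins:2.121.1
```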
Run Pipeline builds that run `node { sh 'sleep 5' }` and the RSS will grow without bound.
e.g.
Mon Jun 11 10:35:25 UTC 2018 RSS=715MB Java=544MB
Mon Jun 11 10:36:25 UTC 2018 RSS=718MB Java=556MB
Mon Jun 11 10:37:25 UTC 2018 RSS=731MB Java=542MB
...
Mon Jun 11 13:59:53 UTC 2018 RSS=900MB Java=677MB
Mon Jun 11 14:00:53 UTC 2018 RSS=901MB Java=687MB
Mon Jun 11 14:01:53 UTC 2018 RSS=900MB Java=677MB
Given long enough, the RSS will grow above 2 GB (I left a container running overnight with a 2 GB memory limit on the container, and docker killed it due to RSS growth). Java managed memory is fine.
In the table above, the Java= value comes from `-XX:NativeMemoryTracking=detail -XX:+UnlockDiagnosticVMOptions`, i.e. the native memory that Java believes it has allocated (as distinct from the memory the process has been allocated by the OS).
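A sketch of the loop that produced the table above, assuming `jcmd` from the same JDK is on the PATH and the Jenkins PID is passed as the first argument; the one-minute interval matches the timestamps in the table:

```shell
#!/bin/sh
# Compare OS-reported RSS against the JVM's own Native Memory Tracking total.
# A widening gap between the two points at native allocations the JVM does
# not know about (i.e. direct malloc from native code).

# Extract the committed total (in KB) from `jcmd <pid> VM.native_memory summary`,
# whose summary line looks like: "Total: reserved=...KB, committed=...KB"
nmt_committed_kb() {
  awk -F'committed=' '/^Total/ { sub(/KB.*/, "", $2); print $2 }'
}

monitor() {
  pid=$1
  while kill -0 "$pid" 2>/dev/null; do
    rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
    java_kb=$(jcmd "$pid" VM.native_memory summary | nmt_committed_kb)
    echo "$(date -u) RSS=$((rss_kb / 1024))MB Java=$((java_kb / 1024))MB"
    sleep 60
  done
}

# Usage: ./monitor.sh <jenkins-pid>
[ -n "$1" ] && monitor "$1"
```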
The leak is still present if you run with `-Xint`, which rules out growth of JIT-compiled code as the cause.
This leads me to suspect that an FFI / JNI call is using malloc directly and not releasing the corresponding resources (as all Java-managed use of malloc should be tagged by Native Memory Tracking).
The next step is probably to run with jemalloc and see whether its profiling can tag the offending call site, then walk back from there.
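A sketch of that jemalloc setup, assuming a jemalloc build with profiling enabled (`--enable-prof`); the library path is Debian/Ubuntu's and the profile prefix is arbitrary:

```shell
# Preload jemalloc so it intercepts all malloc/free in the JVM process.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
# Enable profiling and dump a heap profile every 2^30 bytes (~1 GB)
# of cumulative allocation activity.
export MALLOC_CONF='prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof'
java -Xmx256m -XX:MaxMetaspaceSize=128m -jar /usr/share/jenkins/jenkins.war
```

The resulting `/tmp/jeprof.*.heap` dumps can then be symbolized with `jeprof` (e.g. `jeprof --show_bytes "$(command -v java)" /tmp/jeprof.*.heap`) to attribute the leaked bytes to native call sites.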