Not sure if this will help, but in our /etc/sysconfig/jenkins we added:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dgroovy.use.classvalue=true -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -Dhudson.model.ParametersAction.keepUndefinedParameters=true"
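If you want to confirm the options actually took effect after a restart, a quick check from Manage Jenkins -> Script Console is something like this (just a sketch, using the standard JMX runtime bean):

import java.lang.management.ManagementFactory
// Print the JVM arguments the running Jenkins master was started with
println ManagementFactory.getRuntimeMXBean().getInputArguments().join('\n')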
We also run some crazy Groovy code in some of our Jenkins jobs and we tend to run out of memory too, so we've installed this plugin to help us track Java resources:
https://wiki.jenkins-ci.org/display/JENKINS/Monitoring
And finally, we periodically (once per hour, I think) run this Groovy script to clean things up:
import net.bull.javamelody.*;
// Memory in use before the collection
before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.gc();
// Memory in use after the collection
after = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
// Report how many KB were freed, using JavaMelody's i18n message
println I18N.getFormattedString("ramasse_miette_execute", Math.round((before - after) / 1024));
And we launch it like this:
java -jar jenkins-cli.jar -noCertificateCheck -i id_rsa -s JENKINS_URL groovy my_groovy_script.groovy
You can probably also set up that Groovy script as a Jenkins job and have it run periodically, along the lines of the sketch below.
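If you go that route, a scheduled Pipeline job is one way to do it. This is only a sketch (the cron spec and stage name are made up, and calls like Runtime.getRuntime() and System.gc() will need script-security approval in a sandboxed pipeline):

pipeline {
    agent none
    triggers {
        cron('H * * * *')   // roughly once per hour
    }
    stages {
        stage('Force GC') {
            steps {
                script {
                    // Pipeline Groovy runs on the controller, so this targets the master JVM
                    long before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()
                    System.gc()
                    long after = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()
                    echo "Freed roughly ${Math.round((before - after) / 1024)} KB"
                }
            }
        }
    }
}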
Ever since we started running that System.gc() command, we've never run out of memory. We run hundreds of jobs a day on an AWS t2.medium without any downtime for months at a time. Before I did these things, we were running on a huge instance and we had to restart it every week or so.
Hope this helps!
From what I see, it happens in the Groovy class management logic. So my guess is that it is just another follow-up to the Groovy update to 2.4.8 in 2.47 (for JENKINS-33358). The pattern is close to JENKINS-42189.
CC daspilker and jglick