Not sure if this will help, but in our /etc/sysconfig/jenkins we added:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dgroovy.use.classvalue=true -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -Dhudson.model.ParametersAction.keepUndefinedParameters=true"
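As a quick sanity check (this command is my suggestion, not part of the original setup), you can ask the JVM to resolve those heap flags and print the final values; run it with the same java binary Jenkins uses:

```shell
# Ask the JVM to resolve -Xms/-Xmx and print the final heap flag values.
# Sanity check only; use the same java binary your Jenkins service runs with.
java -Xms4096m -Xmx4096m -XX:+PrintFlagsFinal -version | grep -E 'InitialHeapSize|MaxHeapSize'
</imports>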
We also run some crazy Groovy code in some of our Jenkins jobs and tend to run out of memory too, so we installed this plugin to help us track Java resources:
And finally, we periodically (once per hour, I think) run this Groovy script to clean things up:
// Measure used heap, force a garbage collection, then report how much was freed (in KiB).
before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.gc();
after = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
println "GC freed ${Math.round((before - after) / 1024)} KiB";
And we launch it like this:
java -jar jenkins-cli.jar -noCertificateCheck -i id_rsa -s JENKINS_URL groovy my_groovy_script.groovy
You can probably set that Groovy script up as a Jenkins job and have it run periodically.
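If you'd rather schedule it outside Jenkins, a plain cron entry running the same jenkins-cli command works too. This is a sketch: the absolute paths are assumptions, and JENKINS_URL and my_groovy_script.groovy are the placeholders from the command above.

```shell
# Hypothetical crontab entry: run the cleanup script at the top of every hour.
# The paths to jenkins-cli.jar, the SSH key, and the script are assumptions.
0 * * * * java -jar /opt/jenkins-cli.jar -noCertificateCheck -i /home/jenkins/.ssh/id_rsa -s JENKINS_URL groovy /opt/my_groovy_script.groovy
```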
Ever since we started running that System.gc() call, we haven't run out of memory. We run hundreds of jobs a day on an AWS t2.medium without any downtime for months at a time. Before these changes, we were running on a much larger instance and had to restart it every week or so.
Hope this helps!