Not sure if this will help, but in our /etc/sysconfig/jenkins we added:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dgroovy.use.classvalue=true -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -Dhudson.model.ParametersAction.keepUndefinedParameters=true"
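If you want to double-check that those options actually took effect after restarting Jenkins, here's a quick sketch you can paste into the Script Console (Manage Jenkins > Script Console); it just prints the JVM arguments the master was started with:
import java.lang.management.ManagementFactory
// list the flags the Jenkins JVM is actually running with
println ManagementFactory.getRuntimeMXBean().inputArguments.join("\n")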
We also run some crazy Groovy code in some of our Jenkins jobs and we tend to run out of memory too, so we've installed this plugin to help us track Java resources:
https://wiki.jenkins-ci.org/display/JENKINS/Monitoring
And finally, we periodically (once per hour, I think) run this Groovy script to clean things up:
import net.bull.javamelody.*;
// measure used heap, force a full GC, then report how much was freed
before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.gc();
after = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
println I18N.getFormattedString("ramasse_miette_execute", Math.round((before - after) / 1024));
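If you'd rather not depend on the JavaMelody classes from the Monitoring plugin, something along these lines in plain Groovy should do the same job (just a sketch, not tested on your setup):
// log heap usage before and after a forced GC, without any plugin classes
def used = { Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() }
long before = used()
System.gc()
long after = used()
println "GC freed ${Math.round((before - after) / 1024)} KB, ${after.intdiv(1024 * 1024)} MB still in use"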
And we launch it like this:
java -jar jenkins-cli.jar -noCertificateCheck -i id_rsa -s JENKINS_URL groovy my_groovy_script.groovy
You can probably set that Groovy script up as a Jenkins job and have it run periodically.
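For example, a Pipeline job along these lines would do it (just a sketch: it assumes the Pipeline plugin, and that jenkins-cli.jar and the id_rsa key sit at the paths shown on whatever node runs it):
pipeline {
    agent any
    // 'H * * * *' spreads the run somewhere within each hour
    triggers { cron('H * * * *') }
    stages {
        stage('Force GC') {
            steps {
                // JENKINS_URL is set by Jenkins itself; adjust the jar and key paths for your setup
                sh 'java -jar jenkins-cli.jar -noCertificateCheck -i id_rsa -s "$JENKINS_URL" groovy my_groovy_script.groovy'
            }
        }
    }
}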
Ever since we started running that System.gc() command, we've never run out of memory. We run hundreds of jobs a day on an AWS t2.medium without any downtime for months at a time. Before I did these things, we were running on a huge instance and had to restart it every week or so.
Hope this helps!
heikkisi, we understand this is a critical issue for you. In order to solve it, can you provide an isolated example that reproduces this without depending on your infrastructure?
I think the reason we're hesitant to update Groovy further is that every update seems to introduce a new issue of this sort, which generally requires a significant time investment from someone deeply specialized in this area.