
64-bit java.lang.OutOfMemoryError: PermGen space

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Component: core

      Even with -XX:PermSize=512M I still get java.lang.OutOfMemoryError: PermGen space about once a day under light load. Our 32-bit Jenkins has never had this problem, with no special settings. Memory leak?

      Apr 26, 2012 9:56:34 AM winstone.Logger logInternal
      WARNING: Untrapped Error in Servlet
      java.lang.OutOfMemoryError: PermGen space
      at java.lang.Throwable.getStackTraceElement(Native Method)
      at java.lang.Throwable.getOurStackTrace(Throwable.java:591)
      at java.lang.Throwable.printStackTraceAsCause(Throwable.java:529)
      at java.lang.Throwable.printStackTraceAsCause(Throwable.java:545)
      at java.lang.Throwable.printStackTraceAsCause(Throwable.java:545)
      at java.lang.Throwable.printStackTrace(Throwable.java:516)
      at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:224)
      at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:171)
      at net.bull.javamelody.PluginMonitoringFilter.doFilter(PluginMonitoringFilter.java:86)
      at org.jvnet.hudson.plugins.monitoring.HudsonMonitoringFilter.doFilter(HudsonMonitoringFilter.java:84)
      at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:98)
      at hudson.plugins.greenballs.GreenBallFilter.doFilter(GreenBallFilter.java:74)
      at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:98)
      at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:87)
      at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
      at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
      at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:47)
      at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
      at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
      at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84)
      at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
      at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164)
      at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
      at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
      at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:81)
      at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
      at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
      at winstone.RequestDispatcher.forward(RequestDispatcher.java:331)
      at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:215)
      at winstone.RequestHandlerThread.run(RequestHandlerThread.java:138)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
      Apr 26, 2012 9:56:37 AM winstone.Logger logInternal
      WARNING: Untrapped Error in Servlet
      java.lang.OutOfMemoryError: PermGen space
      Apr 26, 2012 9:56:50 AM hudson.triggers.SafeTimerTask run
      SEVERE: Timer task hudson.model.LoadStatistics$LoadStatisticsUpdater@2b1c2043 failed
      java.lang.OutOfMemoryError: PermGen space
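Since the report suggests a leak rather than a merely undersized pool, a first diagnostic step is to watch how the permanent generation grows over time. Below is a minimal standalone sketch (not part of Jenkins; the class name is made up) using the standard java.lang.management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints the current usage of every JVM memory pool. On JVMs of this era the
// pool of interest is named something like "PS Perm Gen" or "CMS Perm Gen";
// on Java 8+ it is "Metaspace".
public class MemoryPoolWatcher {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (!pool.isValid()) continue;
            MemoryUsage usage = pool.getUsage();
            if (usage == null) continue;
            // max is -1 when the pool has no configured limit
            System.out.printf("%-25s used=%,d max=%,d%n",
                    pool.getName(), usage.getUsed(), usage.getMax());
        }
    }
}
```

Polling this periodically (or watching the same numbers via JavaMelody, as later comments do) shows whether PermGen usage only ever grows — the signature of a class loader leak — or plateaus, which would point at an undersized pool instead.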

        1. memory.dump.bz2
          8.53 MB
        2. periodicbackup.hpi
          1.67 MB
        3. periodicbackup.hpi
          1.67 MB
        4. memory.dump.bz2
          8.54 MB

          [JENKINS-13617] 64-bit java.lang.OutOfMemoryError: PermGen space

          wgracelee added a comment -

          We're getting this error at least once per day after we upgraded to 1.463. With the previous version, 1.456, we hardly ever saw it. The underlying OS is CentOS 5.5, 64-bit.


          wgracelee added a comment -

          And our Jenkins is being run as:
          java -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -Xms... -Xmx... -jar ...


          evernat added a comment -

          and you have not set MaxPermSize?


          wgracelee added a comment -

          Nope. I'll set it to 512m and see how it turns out.


          Frédéric Camblor added a comment - edited

          Seems like I'm hitting the same issue about once a day (a restart solves the problem, until the next day).

          I'm facing it on 1.466.1 (LTS) under Ubuntu, installed via the automatic package installer.


          Michael Pailloncy added a comment -

          I think we are facing the same problem. Our Jenkins crashes every 2 days.
          Configuration:
          Jenkins 1.478 running on AIX 5.3, IBM Java 6, 5 slaves (RHEL-5 and Windows), and nearly 100 active jobs.

          When analyzing memory with JavaMelody monitoring, it seems that PermGen (native memory in the case of IBM) is constantly increasing.

          We have not yet found the cause, but we noticed that it often blocks at fixed hours (4:00 pm, 6:00 pm, ...). Our suspicion falls on plugins or features performing heavy tasks periodically.

          Nikita Salnikov-Tarnovski added a comment -

          If you suspect you have a leak (in the current case, in the permanent generation), you can try using Plumbr with your Jenkins instance: http://plumbr.eu/blog/plumbr-1-1-we-now-find-permgen-leaks

          Disclaimer: I am Plumbr's developer

          Steve Roth added a comment -

          Me too, on 1.491.


          Niklaus Giger added a comment -

          Hi everyone. I need this bug/feature so much that I'm willing to pay 100.00 bucks for it.
          This offer is registered at FreedomSponsors (http://www.freedomsponsors.org/core/issue/92/64-bit-javalangoutofmemoryerror-permgen-space).
          Once you solve it (according to the acceptance criteria described there), just create a FreedomSponsors account and mark it as resolved (oh, you'll need a Paypal account too)
          I'll then check it out and will gladly pay up!

          If anyone else would like to throw in a few bucks to elevate the priority on this issue, you should check out FreedomSponsors!


          Niklaus Giger added a comment -

          I have this issue frequently on my CI http://ngiger.dyndns.org/jenkins/. I changed my projects to poll only at 3 minutes after the hour.

          I see the problem mostly when one of my slaves (1 Windows, 1 MacOSX, 1 Debian/Lenny) does some work. If I clean a workspace on one of these slaves, I very often get the PermGen error.


          Johno Crawford added a comment - edited

          To help debug this issue, add the VM flag -XX:+HeapDumpOnOutOfMemoryError and restart Jenkins. The next time an OOM occurs, upload the resulting java_pid<pid>.hprof for analysis.


          Niklaus Giger added a comment - edited

          Under my Debian-squeeze installation I modified /etc/default/jenkins. Now jenkins is started with the following arguments:

          /usr/bin/java -jar /usr/share/jenkins/jenkins.war --webroot=/var/run/jenkins/war -XX:-HeapDumpOnOutOfMemoryError --httpPort=8080 --ajp13Port=-1 --prefix=/jenkins
          I find various occurrences of "java.lang.OutOfMemoryError: PermGen space" in my /var/log/jenkins/jenkins.log, but was unable to find any trace of a java_pid<pid>.hprof file. Where should it land? Anything else I should change?


          Johno Crawford added a comment - edited

          It should be in the logs directory, as java_pid[actual pid].hprof. If you still cannot locate the dump, consider configuring the path with the -XX:HeapDumpPath switch, e.g. -XX:HeapDumpPath=/usr/share/jenkins

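If the OOM-triggered dump never fires, or lands somewhere unexpected, a heap dump can also be taken on demand from inside the JVM. A sketch using the HotSpot-only com.sun.management.HotSpotDiagnosticMXBean (the class name and output path below are illustrative):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Writes a heap dump of the running JVM to the given path. HotSpot-only.
public class HeapDumper {
    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly=true performs a full GC first and dumps only reachable objects
        diag.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        // Note: dumpHeap refuses to overwrite an existing file, and newer JDKs
        // require the .hprof extension on the path.
        dump("manual-dump.hprof", true);
    }
}
```

The same operation is available externally via `jmap -dump:live,format=b,file=manual-dump.hprof <pid>` without touching the Jenkins process's startup flags.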

          Johno Crawford added a comment - edited

          No need for jenkins.log at the moment. The heap dump looks OK-ish; the top instance count is java.lang.Class. I would recommend adding the switch -XX:+CMSClassUnloadingEnabled. Looking at the thread dump, I see it OOM'd in the PeriodicBackup ( https://wiki.jenkins-ci.org/display/JENKINS/PeriodicBackup+Plugin ) thread. Would you please try installing my snapshot with updated libraries ( https://issues.jenkins-ci.org/secure/attachment/23010/periodicbackup.hpi ) or disabling the plugin temporarily to see if it helps.

          "PeriodicBackup thread" prio=5 tid=3023 RUNNABLE
          at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:25)
          at java.lang.ClassLoader.defineClass1(Native Method)
          at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
          ..


          Johno Crawford added a comment - edited

          I have created a new build of the periodicbackup plugin ( https://issues.jenkins-ci.org/secure/attachment/23010/periodicbackup.hpi ) with the latest plexus-archiver, let me know if it helps!


          Johno Crawford added a comment -

          I created a PR with the changes I made to the PeriodicBackup plugin: https://github.com/jenkinsci/periodicbackup-plugin/pull/4

          Johno Crawford added a comment -

          Updated the periodicbackup snapshot to include a fix for http://jira.codehaus.org/browse/PLXCOMP-194 .

          Niklaus Giger added a comment -

          I added -XX:+CMSClassUnloadingEnabled to my args and installed 23010/periodicbackup.hpi into my plugins folder. The new version shows up under pluginManager/installed.

          But I am getting the following error on a restart.

          INFO: Ignoring /var/lib/jenkins/plugins/periodicbackup.jpi because /var/lib/jenkins/plugins/periodicbackup.hpi is already loaded
          WARNUNG: Failed to scout org.jenkinsci.plugins.periodicbackup.PeriodicBackup
          WARNUNG: Failed to load org.jenkinsci.plugins.periodicbackup.PeriodicBackup
          at org.jenkinsci.plugins.periodicbackup.ZipStorage$DescriptorImpl.<init>(ZipStorage.java:247)
          at org.jenkinsci.plugins.periodicbackup.StorageDescriptor.<init>(StorageDescriptor.java:37)
          at org.jenkinsci.plugins.periodicbackup.ZipStorage$DescriptorImpl.<init>(ZipStorage.java:247)
          at org.jenkinsci.plugins.periodicbackup.ZipStorage$DescriptorImpl$$FastClassByGuice$$56b6552.newInstance(<generated>)
          at org.jenkinsci.plugins.periodicbackup.TarGzStorage$DescriptorImpl.<init>(TarGzStorage.java:132)
          at org.jenkinsci.plugins.periodicbackup.StorageDescriptor.<init>(StorageDescriptor.java:37)
          at org.jenkinsci.plugins.periodicbackup.TarGzStorage$DescriptorImpl.<init>(TarGzStorage.java:132)
          at org.jenkinsci.plugins.periodicbackup.TarGzStorage$DescriptorImpl$$FastClassByGuice$$a3f0ac29.newInstance(<generated>)
          at org.jenkinsci.plugins.periodicbackup.PeriodicBackupLink.load(PeriodicBackupLink.java:185)
          at org.jenkinsci.plugins.periodicbackup.PeriodicBackupLink.<init>(PeriodicBackupLink.java:71)
          at org.jenkinsci.plugins.periodicbackup.PeriodicBackupLink$$FastClassByGuice$$ba9b62be.newInstance(<generated>)


          Johno Crawford added a comment - edited

          Looks like a version conflict. Try shutting down your Jenkins instance, removing all files beginning with periodicbackup in /var/lib/jenkins/plugins (the folder named periodicbackup, periodicbackup.jpi, etc.), placing a fresh copy of 23010/periodicbackup.hpi at /var/lib/jenkins/plugins/periodicbackup.hpi, and then starting Jenkins and verifying you are running the SNAPSHOT version from the plugin manager.


          Niklaus Giger added a comment -

          Memory dump with arguments "-XX:+CMSClassUnloadingEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/jenkins/memory.dump"
          and PeriodicBackup with version
          1.2-SNAPSHOT (private-12/29/2012 17:07-johno)


          Johno Crawford added a comment - edited

          Heap dump looks better than before. This time it blew up at

          "Pipe writer thread: Jenkins-win7" prio=5 tid=16893 RUNNABLE
          at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:25)
          at java.lang.Class.getDeclaredConstructors0(Native Method)
          at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
          at java.lang.Class.getConstructor0(Class.java:2699)
          at java.lang.Class.newInstance0(Class.java:326)
          Local Variable: java.lang.Class[]#7844
          at java.lang.Class.newInstance(Class.java:308)
          Local Variable: class sun.reflect.GeneratedSerializationConstructorAccessor6164
          at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
          Local Variable: sun.reflect.MethodAccessorGenerator$1#1
          at java.security.AccessController.doPrivileged(Native Method)
          at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)
          Local Variable: java.lang.String#60850
          Local Variable: sun.reflect.ByteVectorImpl#1
          Local Variable: java.lang.Class#9
          ..

          What are your current memory settings for the JVM? If they have been omitted please try restarting the JVM with the following flags "-XX:MaxPermSize=256m -Xmx768m -XX:+CMSClassUnloadingEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/jenkins/memory.dump".

          winstone.classLoader.WebappClassLoader seems to be holding on to quite a lot of classes, but this might well be normal behaviour. To rule out memory leaks in the Winstone container, you could also try running Jenkins on Tomcat to see if it helps.


          Niklaus Giger added a comment -

          Restarted Jenkins with the arguments you suggested. I will let it run for about two days.
          If I get no memory dumps, I will move Jenkins to run under Tomcat 6.
          Thanks a lot for your work!


          Johno Crawford added a comment - edited

          Niklaus, if there are no memory dumps in that time, I think we can conclude the PeriodicBackup plugin was the culprit, and there is no need to move Jenkins to run under Tomcat.

          I have merged the fixes to the PeriodicBackup plugin repo JENKINS-16223 (1.2+ contains the fixes in my snapshot) which should now be available from your plugin manager (you may need to click "Check now" in the advanced tab).

          The exception java.lang.OutOfMemoryError: PermGen space is generic; there is no way to avoid it if there is a class loader memory leak. Increasing the max PermGen size or enabling other switches that modify JVM GC behaviour will (most of the time) only prolong the ticking time bomb.

          For those interested in knowing more about PermGen check out this article: http://frankkieviet.blogspot.ca/2006/10/classloader-leaks-dreaded-permgen-space.html

          If there are others receiving this error after increasing both -XX:MaxPermSize and -Xmx to reasonable values the OOM might be caused by plugin(s) you have installed or the way you use Jenkins (there might be undiscovered memory leaks in core), however without a heap dump to analyse it is practically impossible to debug due to the size of the code base and amount of plugins.

          Adam, instead of using the switch -XX:PermSize you should be using -XX:MaxPermSize. In addition, a 64-bit JVM will use almost double the amount of memory for object pointers ( https://wikis.oracle.com/display/HotSpotInternals/CompressedOops ). You can try adding -XX:+UseCompressedOops to your JVM args to help; however, you may have run into a memory leak like Niklaus (in which case a heap dump is required).
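The class loader leak mechanism described in the article above can be sketched in a few lines: any long-lived reference into a webapp's class loader (a static cache, a registered listener, a ThreadLocal) keeps the whole loader, and every class it defined, strongly reachable, so nothing can ever be unloaded from PermGen. All names in this sketch are illustrative:

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

// Demonstrates how a single static reference pins a class loader in memory.
public class LoaderLeakDemo {
    // Stands in for a static cache in a long-lived, container-level class.
    static Object leakedReference;

    public static void main(String[] args) throws Exception {
        URLClassLoader webappLoader = new URLClassLoader(new URL[0]);
        // A weak reference lets us probe reachability without affecting it.
        WeakReference<URLClassLoader> probe = new WeakReference<>(webappLoader);

        leakedReference = webappLoader; // the "leak": container statics pin the loader
        webappLoader = null;
        System.gc();
        System.out.println("pinned, still reachable: " + (probe.get() != null));

        leakedReference = null;         // release the pin; the loader can now be collected
        for (int i = 0; i < 50 && probe.get() != null; i++) System.gc();
        System.out.println("released, collected: " + (probe.get() == null));
    }
}
```

In Jenkins terms, the plugin or container class playing the role of `leakedReference` is exactly what a heap dump analysis (GC-roots path to the leaked WebappClassLoader) reveals.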


          Niklaus Giger added a comment -

          Johno, I can confirm that there were no memory dumps in the last three days. I have therefore upgraded Jenkins to 1.496 and periodicbackup to 1.3. I will report back in a few days to tell you whether everything is okay with this combination.


          Johno Crawford added a comment -

          How's it looking, Niklaus?

          Niklaus Giger added a comment -

          It is looking good. Running since January 06, 00:37 AM without any PermGen errors.
          It seems that this nasty problem has been fixed for me! I think you are now entitled to collect your $100 on FreedomSponsors.

          Thanks a lot for your help!


            Assignee: Johno Crawford (johno)
            Reporter: Adam Sloan (asloan7)
            Votes: 4
            Watchers: 12