  JENKINS-9349

Viewing large console logs with the Timestamper plugin causes Jenkins to crash

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Component: timestamper-plugin
    • Labels: None
    • Environment: Solaris 10 (SPARC), Tomcat 6.0.29, Jenkins 1.401 and 1.404. This was not an issue in Hudson 1.380. The JVM heap is set to 1.5 GB on one system and 2 GB on another.

      Run a build that generates a large console output, say around 10 MB or more. View the console log for the build; it should work just fine. Add the Timestamper plugin (version 0.6) and run the build again. View the console log and select to view the full log. This frequently crashes Jenkins/Tomcat. Sometimes I actually get the whole log. This has occurred after a restart as well as after Jenkins has been running for a while. I don't see the issue when viewing logs without the timestamper enabled (at least not yet).
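
      For illustration, any build step that floods stdout will do; a minimal sketch (the class name and line count are arbitrary, not part of the original report) that produces roughly 10 MB of console output:

      public class FloodLog {
          public static void main(String[] args) {
              // 64 visible characters per line; ~160,000 lines is roughly 10 MB of output
              String line = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
              for (int i = 0; i < 160000; i++) {
                  System.out.println(i + " " + line);
              }
          }
      }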

          [JENKINS-9349] Viewing large console logs with the Timestamper plugin causes Jenkins to crash

          Richard Mortimer added a comment -

          Irrespective of whether the Timestamper plugin is causing Jenkins to use too much memory, the error in the log shows that the operating system has run out of memory to give to Jenkins:

          # A fatal error has been detected by the Java Runtime Environment:
          #
          # java.lang.OutOfMemoryError: requested 59208 bytes for Chunk::new. Out of swap space?
          

          You need to ensure that your operating system has enough physical memory (or swap space) to allocate the memory that you have told Java it can use.
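
          For context: this particular OutOfMemoryError comes from a native allocation inside the JVM (Chunk::new is HotSpot's own arena allocator, used by the JIT compiler), so the limit being hit is the process's native/virtual memory, not the -Xmx heap. A contrived sketch of heap-invisible native memory, using java.util.zip.Inflater (the kind of resource the eventual fix below turns out to be about); the class name and counts are arbitrary:

          import java.util.ArrayList;
          import java.util.List;
          import java.util.zip.Inflater;

          public class NativeMemorySketch {
              public static void main(String[] args) {
                  // Each Inflater wraps a native zlib stream; that memory is not on the
                  // Java heap and is not bounded by -Xmx. It is freed by end() (or, much
                  // later, by finalization) -- leaking many of them grows the process
                  // footprint while heap usage barely moves.
                  List<Inflater> leaked = new ArrayList<Inflater>();
                  for (int i = 0; i < 10000; i++) {
                      leaked.add(new Inflater(true));
                  }
                  long heapUsed = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
                  System.out.println("heap used (bytes): " + heapUsed);
                  // Proper cleanup: release the native memory eagerly.
                  for (Inflater inf : leaked) {
                      inf.end();
                  }
              }
          }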


          Jose Sa added a comment -

          That's the thing: I don't think that is the problem, because this is running on a dedicated SunFire V445 server with 16 GB of RAM + 16 GB of swap.

          Jenkins was started with -Xmx3500m, and there was just one build running.

          I do have many slaves (15) with 2-3 threads each, but they were all idle.

          Right now I have just removed the -Xmx flag, but I think the default is less than the limit I had set.


          Richard Mortimer added a comment -

          Well, the problem was that the JVM could not allocate any more memory. The error you posted was the JVM crashing because it could not allocate memory. It may be that the OS is limiting the amount of memory that the JVM can use (do you have any ulimits or other process limits set?).

          You need to fix that problem before Jenkins can be blamed.


          Jose Sa added a comment -

          Does this information help?

          -bash-3.00$ ulimit -a
          core file size        (blocks, -c) unlimited
          data seg size         (kbytes, -d) unlimited
          file size             (blocks, -f) unlimited
          open files                    (-n) 256
          pipe size          (512 bytes, -p) 10
          stack size            (kbytes, -s) 8192
          cpu time             (seconds, -t) unlimited
          max user processes            (-u) 29995
          virtual memory        (kbytes, -v) unlimited
          


          Richard Mortimer added a comment -

          None of those limits seem to be a problem.

          It is likely to be a real virtual memory shortage in that case. You say you are using Solaris, so I presume you are using tmpfs for your /tmp storage. In that case it might be that /tmp is getting filled up (does your job use /tmp for any storage? Jenkins may be storing files in /tmp too). You likely need to monitor virtual memory/swap usage and /tmp free space, and work out what is using the space.

          Until you eliminate local machine configuration issues there isn't a lot that can be attributed to Jenkins.
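
          If it helps, /tmp free space can also be checked from the JVM side; a minimal sketch (the path and class name are arbitrary; the File methods are standard Java 6+):

          import java.io.File;

          public class TmpSpaceCheck {
              public static void main(String[] args) {
                  // Report free vs. total space on the filesystem backing /tmp.
                  File tmp = new File("/tmp");
                  long freeMb = tmp.getUsableSpace() / (1024 * 1024);
                  long totalMb = tmp.getTotalSpace() / (1024 * 1024);
                  System.out.println("/tmp: " + freeMb + " MB free of " + totalMb + " MB");
              }
          }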


          Kohsuke Kawaguchi added a comment -

          I have a hunch. Timestamper uses an extension point called console annotation, which stores a small gzip-compressed stream of data in the log file. It uses a lot of those, because each timestamp it adds is, I believe, a new annotation.

          If there's a race condition in the write operations, it could corrupt the gzip-encoded stream, and I wonder if it can happen in just the right way to break the decompressor like this.

          If my hypothesis is correct, trying to look at the problematic build log will consistently reproduce the problem (as opposed to "sometimes I can see build #15 but sometimes it crashes when looking at the same build #15").

          If this fits the issue you are seeing, please attach your console output (or send it to me personally if it's sensitive); that would greatly simplify our efforts to fix the issue.
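
          To make the hypothesis concrete, here is a self-contained illustration of the general idea using plain java.util.zip (this is not the actual ConsoleNote API or wire format, just an assumed shape): each annotation is a small gzip-compressed blob written into the log, and the reader has to inflate every one of them; a truncated or interleaved blob makes the decompressor throw.

          import java.io.ByteArrayInputStream;
          import java.io.ByteArrayOutputStream;
          import java.io.ObjectInputStream;
          import java.io.ObjectOutputStream;
          import java.util.zip.GZIPInputStream;
          import java.util.zip.GZIPOutputStream;

          public class AnnotationBlobSketch {
              // Serialize a small annotation object as one gzip blob.
              static byte[] encode(Object note) throws Exception {
                  ByteArrayOutputStream buf = new ByteArrayOutputStream();
                  ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(buf));
                  oos.writeObject(note);
                  oos.close(); // finishes the gzip stream
                  return buf.toByteArray();
              }

              // Read one blob back. Corrupted or interleaved bytes make this throw.
              static Object decode(byte[] blob) throws Exception {
                  ObjectInputStream ois = new ObjectInputStream(
                          new GZIPInputStream(new ByteArrayInputStream(blob)));
                  try {
                      return ois.readObject();
                  } finally {
                      ois.close();
                  }
              }

              public static void main(String[] args) throws Exception {
                  byte[] blob = encode("12:34:56"); // one timestamp annotation per log line
                  System.out.println(decode(blob));
              }
          }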


          Kohsuke Kawaguchi added a comment -

          I take it back. Neither 32756 nor 59208 is a big memory chunk.


          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Kohsuke Kawaguchi
          Path:
          changelog.html
          core/src/main/java/hudson/console/AnnotatedLargeText.java
          core/src/main/java/hudson/console/ConsoleNote.java
          http://jenkins-ci.org/commit/jenkins/313fb5940f63ca7e13281f4bba1cdbcfcfd8f2c3
          Log:
          [FIXED JENKINS-9349] let GZipInputStream release its native memory
          eagerly.

          Compare: https://github.com/jenkinsci/jenkins/compare/e7cc141...313fb59
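
          The essence of the change, paraphrased rather than the literal patch: close the GZIPInputStream used to decode each annotation as soon as the annotation has been read, so the underlying zlib Inflater's native memory is released immediately instead of accumulating until finalizers run. In plain java.util.zip terms (class and method names are illustrative only):

          import java.io.ByteArrayInputStream;
          import java.io.ByteArrayOutputStream;
          import java.util.zip.GZIPInputStream;
          import java.util.zip.GZIPOutputStream;

          public class EagerGzipClose {
              static byte[] gzip(byte[] data) throws Exception {
                  ByteArrayOutputStream buf = new ByteArrayOutputStream();
                  GZIPOutputStream gz = new GZIPOutputStream(buf);
                  gz.write(data);
                  gz.close();
                  return buf.toByteArray();
              }

              static byte[] gunzipEagerly(byte[] blob) throws Exception {
                  GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(blob));
                  try {
                      ByteArrayOutputStream out = new ByteArrayOutputStream();
                      byte[] chunk = new byte[4096];
                      for (int n; (n = gz.read(chunk)) != -1; ) {
                          out.write(chunk, 0, n);
                      }
                      return out.toByteArray();
                  } finally {
                      gz.close(); // frees the Inflater's native memory now, not at finalization
                  }
              }

              public static void main(String[] args) throws Exception {
                  // Decode many small annotations, as when rendering a very large log.
                  byte[] blob = gzip("12:34:56".getBytes("UTF-8"));
                  for (int i = 0; i < 100000; i++) {
                      gunzipEagerly(blob);
                  }
                  System.out.println("done");
              }
          }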


          dogfood added a comment -

          Integrated in jenkins_main_trunk #1267

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Kohsuke Kawaguchi
          Path:
          changelog.html
          core/src/main/java/hudson/console/AnnotatedLargeText.java
          core/src/main/java/hudson/console/ConsoleNote.java
          http://jenkins-ci.org/commit/jenkins/7aab7ebf317d8ad1c81b72e813c339fbab05dfca
          Log:
          [FIXED JENKINS-9349] let GZipInputStream release its native memory
          eagerly.
          (cherry picked from commit 313fb5940f63ca7e13281f4bba1cdbcfcfd8f2c3)

          Conflicts:

          changelog.html


            Assignee: stevengbrown (Steven G Brown)
            Reporter: ford30066 (Matthew Ford)
            Votes: 1
            Watchers: 1
