
RingBufferLogHandler throws ArrayIndexOutOfBoundsException after int-overflow

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Component: core

      On our Jenkins (1.403) the following exception occurred (only a restart brings the system back to normal behaviour):

      java.lang.ArrayIndexOutOfBoundsException: -255
      at hudson.util.RingBufferLogHandler.publish(RingBufferLogHandler.java:52)
      at java.util.logging.Logger.log(Unknown Source)
      at java.util.logging.Logger.doLog(Unknown Source)
      at java.util.logging.Logger.log(Unknown Source)
      at java.util.logging.Logger.fine(Unknown Source)
      at hudson.security.SidACL.hasPermission(SidACL.java:54)
      at hudson.security.ACL.checkPermission(ACL.java:52)
      at hudson.model.Node.checkPermission(Node.java:316)
      at hudson.model.Hudson.getTarget(Hudson.java:3409)
      at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:497)
      at org.kohsuke.stapler.Stapler.invoke(Stapler.java:640)
      at org.kohsuke.stapler.Stapler.invoke(Stapler.java:478)
      at org.kohsuke.stapler.Stapler.service(Stapler.java:160)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
      ... more

      After that, all requests to Jenkins fail with the same exception.
      The source of RingBufferLogHandler shows that the primitive int fields start and size are incremented but never decremented ...

      private int start = 0;
      private final LogRecord[] records;
      private volatile int size = 0;

      ...

      public synchronized void publish(LogRecord record) {
          int len = records.length;
          records[(start + size) % len] = record;
          if (size == len) {
              start++;
          } else {
              size++;
          }
      }

      So after a while an overflow occurs and causes the ArrayIndexOutOfBoundsException.
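
      To see why the index becomes -255, note that Java ints overflow silently and that the % operator keeps the sign of its left operand. A minimal sketch, assuming a 256-entry buffer (consistent with the -255 index in the stack traces above):

      public class OverflowDemo {
          public static void main(String[] args) {
              int len = 256;                  // assumed buffer capacity, consistent with the -255 index
              int size = len;                 // the buffer has been full for a long time
              int start = Integer.MAX_VALUE;  // start after roughly 2^31 calls to publish()
              // start + size wraps around to a large negative int, and Java's %
              // keeps the sign of the dividend, so the computed index is negative:
              int index = (start + size) % len;
              System.out.println(index);      // prints -1; subsequent publishes yield indices between -255 and 0
          }
      }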

          [JENKINS-9120] RingBufferLogHandler throws ArrayIndexOutOfBoundsException after int-overflow

          vdupain added a comment -

          Same issue on our Jenkins (1.499), the stack trace:

          Jan 22, 2013 4:11:23 AM winstone.Logger logInternal
          WARNING: Untrapped Error in Servlet
          java.lang.ArrayIndexOutOfBoundsException: -255
          at hudson.util.RingBufferLogHandler.publish(RingBufferLogHandler.java:52)
          at java.util.logging.Logger.log(Logger.java:478)
          at java.util.logging.Logger.doLog(Logger.java:500)
          at java.util.logging.Logger.logp(Logger.java:616)
          at java.util.logging.Logger.entering(Logger.java:872)
          at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:155)
          at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
          at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
          at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:50)
          at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
          at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
          at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:81)
          at winstone.FilterConfiguration.execute(FilterConfiguration.java:194)
          at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:366)
          at winstone.RequestDispatcher.forward(RequestDispatcher.java:331)
          at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:215)
          at winstone.RequestHandlerThread.run(RequestHandlerThread.java:138)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)


          Alex T added a comment - edited

          Same error on jenkins-1.505-1.1:

          INFO: Started Workspace clean-up
          Exception in thread "Workspace clean-up thread" java.lang.ArrayIndexOutOfBoundsException: -255
          at hudson.util.RingBufferLogHandler.publish(RingBufferLogHandler.java:52)
          at java.util.logging.Logger.log(Logger.java:476)
          at java.util.logging.Logger.doLog(Logger.java:498)
          at java.util.logging.Logger.log(Logger.java:521)
          at hudson.model.AsyncPeriodicWork$1.run(AsyncPeriodicWork.java:46)
          at java.lang.Thread.run(Thread.java:636)
          Apr 17, 2013 1:46:38 PM hudson.triggers.SCMTrigger$Runner run
          INFO: SCM changes detected in (removed). Triggering #1263
          Apr 17, 2013 1:48:00 PM winstone.Logger logInternal
          SEVERE: Error within request handler thread
          java.lang.ArrayIndexOutOfBoundsException: -255
          at hudson.util.RingBufferLogHandler.publish(RingBufferLogHandler.java:52)
          at java.util.logging.Logger.log(Logger.java:476)
          at java.util.logging.Logger.doLog(Logger.java:498)
          at java.util.logging.Logger.log(Logger.java:587)
          at winstone.Logger.logInternal(Logger.java:157)
          at winstone.Logger.log(Logger.java:183)
          at winstone.ObjectPool.getRequestFromPool(ObjectPool.java:120)
          at winstone.ajp13.Ajp13Listener.allocateRequestResponse(Ajp13Listener.java:158)
          at winstone.RequestHandlerThread.run(RequestHandlerThread.java:69)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
          at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
          at java.lang.Thread.run(Thread.java:636)


          Sebastian Laag added a comment -

          Same problem here. Using 1.516 with Windows Server 2008 R2, JDK 1.7 Update 21 64-bit.

          Jun 11, 2013 2:21:56 PM winstone.Logger logInternal
          SEVERE: Error within request handler thread
          java.lang.ArrayIndexOutOfBoundsException: -255

          The log is full of these exceptions. The only way to bring Jenkins back up is to restart it.


          Liya Katz added a comment -

          Jenkins 1.511 on CentOS 5.6 - same problem.
          Log is full of these exceptions and Jenkins is stuck. Only restarting the service solves the issue.


          Croesus Kall added a comment - edited

          Same problem:
          Jenkins 1.518, RHEL6.4, running behind apache 2.2.15
          Happens twice a week; only a restart fixes it.
          (attached stacktrace: JENKINS-9120-ringbuffer-stacktrace-cr.log )


          Sebastian Laag added a comment -

          Using 1.516 with Windows Server 2008 R2, JDK 1.7 Update 21 64-bit.

          Maybe it's a problem of excessive Jenkins usage? We currently use a setup of 1 master and 12 slaves, all with the same JDK, and a JNLP connection between slaves and master.

          This problem occurs nearly every day, even though we restart all master and slave VMs once a day.

          Is there a possible fix or workaround?


          Marco Rothe added a comment -

          > Maybe its a problem of excessive jenkins usage ?

          IMO it is a "problem" of excessive logging. We encountered the problem in 2011 with DEBUG logging enabled on some categories.

          I assume that with only warn/error logging the problem almost never occurs ... but: the bug exists and is an accident waiting to happen!

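          (A rough estimate of why logging volume matters, assuming a 256-entry buffer: start only advances once the buffer is full, so overflowing a Java int takes about 2^31 ≈ 2.1 billion published log records. At 100 records per second that is roughly 8 months of uptime; at 1,000 per second, about 25 days; with DEBUG logging the rate can be far higher, which is consistent with the range of failure frequencies reported above.)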

          Richard Pon added a comment -

          We just encountered this issue this week. It's happened twice now, about every three days. Any ideas on which logs contribute to the problem the most?

          I changed my log levels to warning and info, hoping that will be a good workaround. Any ideas would be great. Thanks.


          Richard Pon added a comment - edited

          After some investigation, we found that a large number of calls to the Jenkins API caused this issue to occur in our system. Not sure how API calls are logged. We estimate that this failure occurred after about 37 million API requests.

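          (For scale, assuming a 256-entry buffer: 37 million requests alone would not overflow a 32-bit counter; reaching 2^31 publishes would require each request to produce roughly 58 log records, which is plausible with fine-grained logging enabled.)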

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Josh Gibbs
          Path:
          core/src/main/java/hudson/util/RingBufferLogHandler.java
          http://jenkins-ci.org/commit/jenkins/67838866c6c851adc8e39dbd20ff81a434cf6cc6
          Log:
          [FIXED JENKINS-9120] RingBufferLogHandler has int that needs to be reset.

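          The essence of the fix is to keep start bounded instead of letting it grow without limit. A minimal sketch of that approach, which may differ in detail from the committed code:

          public synchronized void publish(LogRecord record) {
              int len = records.length;
              records[(start + size) % len] = record;
              if (size == len) {
                  // Wrap start within [0, len) instead of incrementing it forever,
                  // so start + size can never overflow int.
                  start = (start + 1) % len;
              } else {
                  size++;
              }
          }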

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Oliver Gondža
          Path:
          test/src/test/java/hudson/util/RingBufferLogHandlerTest.java
          http://jenkins-ci.org/commit/jenkins/777d232053488b2dcedc122529283c6e69e99c95
          Log:
          JENKINS-9120 Add test

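          A regression test for this bug plausibly forces start near Integer.MAX_VALUE rather than publishing 2^31 records. One possible shape, using reflection to set the private field (field name taken from the excerpt above; the actual test may differ):

          import java.lang.reflect.Field;
          import java.util.logging.Level;
          import java.util.logging.LogRecord;

          import hudson.util.RingBufferLogHandler;

          public class RingBufferOverflowCheck {
              public static void main(String[] args) throws Exception {
                  RingBufferLogHandler handler = new RingBufferLogHandler();
                  // Push the private start counter close to Integer.MAX_VALUE:
                  Field start = RingBufferLogHandler.class.getDeclaredField("start");
                  start.setAccessible(true);
                  start.setInt(handler, Integer.MAX_VALUE - 300);
                  // With the unfixed code this loop throws ArrayIndexOutOfBoundsException
                  // once start + size wraps past Integer.MAX_VALUE:
                  for (int i = 0; i < 1000; i++) {
                      handler.publish(new LogRecord(Level.INFO, "record " + i));
                  }
                  System.out.println("no overflow");
              }
          }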

          dogfood added a comment -

          Integrated in jenkins_main_trunk #3086
          [FIXED JENKINS-9120] RingBufferLogHandler has int that needs to be reset. (Revision 67838866c6c851adc8e39dbd20ff81a434cf6cc6)
          JENKINS-9120 Add test (Revision 777d232053488b2dcedc122529283c6e69e99c95)

          Result = SUCCESS
          ogondza : 67838866c6c851adc8e39dbd20ff81a434cf6cc6
          Files :

          • core/src/main/java/hudson/util/RingBufferLogHandler.java

          ogondza : 777d232053488b2dcedc122529283c6e69e99c95
          Files :

          • test/src/test/java/hudson/util/RingBufferLogHandlerTest.java


          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Oliver Gondža
          Path:
          test/src/test/java/hudson/util/RingBufferLogHandlerTest.java
          http://jenkins-ci.org/commit/jenkins/754ef75447d83767b4485ee8b15bb097f65838c4
          Log:
          JENKINS-9120 Add test

          (cherry picked from commit 777d232053488b2dcedc122529283c6e69e99c95)


          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Josh Gibbs
          Path:
          core/src/main/java/hudson/util/RingBufferLogHandler.java
          http://jenkins-ci.org/commit/jenkins/9cc2bc21307326fd8f92a0c67401d17357ab0390
          Log:
          [FIXED JENKINS-9120] RingBufferLogHandler has int that needs to be reset.

          (cherry picked from commit 67838866c6c851adc8e39dbd20ff81a434cf6cc6)


          pavan kumar added a comment -

          Hi Alex,

          I am encountering a pretty similar issue: many of our Jenkins builds stopped triggering at their scheduled times, and we have no idea what the root cause could be. We restarted our Jenkins master many times with no luck, and I don't see any unusual errors in the logs either. Please help us overcome this issue.


            Assignee: Unassigned
            Reporter: Marco Rothe (marco_rothe)
            Votes: 18
            Watchers: 14