Type: Bug
Resolution: Fixed
Priority: Critical
On our Jenkins (1.403) the following exception occurred (only a restart brings the system back to normal behaviour):
java.lang.ArrayIndexOutOfBoundsException: -255
at hudson.util.RingBufferLogHandler.publish(RingBufferLogHandler.java:52)
at java.util.logging.Logger.log(Unknown Source)
at java.util.logging.Logger.doLog(Unknown Source)
at java.util.logging.Logger.log(Unknown Source)
at java.util.logging.Logger.fine(Unknown Source)
at hudson.security.SidACL.hasPermission(SidACL.java:54)
at hudson.security.ACL.checkPermission(ACL.java:52)
at hudson.model.Node.checkPermission(Node.java:316)
at hudson.model.Hudson.getTarget(Hudson.java:3409)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:497)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:640)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:478)
at org.kohsuke.stapler.Stapler.service(Stapler.java:160)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
... more
After that, all requests to Jenkins fail with the same exception.
The source of RingBufferLogHandler shows that the primitive int fields start and size are incremented but never decremented:
private int start = 0;
private final LogRecord[] records;
private volatile int size = 0;
...
public synchronized void publish(LogRecord record) {
    int len = records.length;
    records[(start+size)%len] = record;
    if (size == len)
        start++;
    else
        size++;
}
So after some time start overflows into negative values, (start+size)%len becomes negative, and the ArrayIndexOutOfBoundsException is thrown.
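The negative index in the stack trace matches Java's remainder semantics: % takes the sign of the dividend, so once start wraps past Integer.MAX_VALUE the computed index goes negative. A minimal sketch of the failure mode (the concrete overflowed value of start and the buffer size are assumed for illustration):

```java
public class RingOverflowDemo {
    public static void main(String[] args) {
        int len = 256;                      // assumed buffer size for illustration
        int size = len;                     // buffer is full, so only start keeps growing
        int start = Integer.MIN_VALUE + 1;  // hypothetical value after start wrapped past Integer.MAX_VALUE

        // Java's % keeps the sign of the dividend, so the index goes negative:
        int brokenIndex = (start + size) % len;
        System.out.println(brokenIndex);    // -255, the index seen in the stack trace

        // A non-negative index regardless of overflow:
        int safeIndex = Math.floorMod(start + size, len);
        System.out.println(safeIndex);      // 1
    }
}
```

Another way to avoid the overflow entirely would be to keep start itself bounded, e.g. start = (start+1)%len in the size==len branch instead of the unbounded start++.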
Is related to: JENKINS-20863 Integer overflow in SupportLogHandler (Closed)
Using 1.516 with Windows Server 2008 R2, JDK 1.7 Update 21 (64-bit).
Maybe it's a problem of excessive Jenkins usage? We currently use a setup of 1 master and 12 slaves, all with the same JDK, connected to the master via JNLP.
This problem occurs nearly every day, even though we restart all master and slave VMs once a day.
Is there a possible fix or workaround?