I noticed the same error today on Jenkins 1.450 with Perforce plugin 1.3.7. In my case it doesn't appear to be caused by huge changelogs, at least as far as I can tell; we don't have huge changelogs. We do have two slaves, and all polling is done on the master. Here is the stack trace:
Apr 1, 2012 7:48:48 PM hudson.triggers.SCMTrigger$Runner runPolling
SEVERE: Failed to record SCM polling
java.lang.OutOfMemoryError: Java heap space
    at hudson.remoting.FastPipedInputStream.<init>(FastPipedInputStream.java:78)
    at hudson.remoting.FastPipedInputStream.<init>(FastPipedInputStream.java:66)
    at hudson.plugins.perforce.HudsonP4RemoteExecutor.exec(HudsonP4RemoteExecutor.java:97)
    at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:321)
    at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:292)
    at com.tek42.perforce.parse.Workspaces.getWorkspace(Workspaces.java:54)
    at hudson.plugins.perforce.PerforceSCM.getPerforceWorkspace(PerforceSCM.java:1208)
    at hudson.plugins.perforce.PerforceSCM.compareRemoteRevisionWith(PerforceSCM.java:903)
    at hudson.scm.SCM._compareRemoteRevisionWith(SCM.java:356)
    at hudson.scm.SCM.poll(SCM.java:373)
    at hudson.model.AbstractProject.poll(AbstractProject.java:1323)
    at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:420)
    at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:449)
    at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
We keep Jenkins running without periodic restarts, so I'm thinking there may be a memory leak somewhere as well. If anyone can give me hints on how to gather more information, I'll try to provide it.
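In the meantime I can start capturing heap numbers over time. A minimal sketch using the standard java.lang.management API (the same two calls work from the Jenkins script console without the class wrapper); I can also run Jenkins with -XX:+HeapDumpOnOutOfMemoryError so the next failure produces a dump to analyze:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Log current heap usage so growth over time is visible.
    public class HeapReport {
        public static void main(String[] args) {
            MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap used: %d MB, committed: %d MB, max: %d MB%n",
                    heap.getUsed() >> 20,
                    heap.getCommitted() >> 20,
                    heap.getMax() >> 20);
        }
    }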
-tim
Limiting the number of files is probably the easiest option here, though another option would be to stream the information from disk on demand rather than deserializing the entire thing into the heap (a rough sketch of that idea is below).
For now, what would a good limit be?
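To make the trade-off concrete, here is a minimal sketch of the streaming-with-a-cap idea; the class and method names are hypothetical, not the plugin's actual API. It reads the p4 response line by line and stops once the limit is hit, instead of buffering the whole output first:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical helper: consume a p4 response line by line, keeping at
    // most maxLines in memory rather than deserializing the whole response.
    public final class CappedResponseReader {
        public static List<String> readCapped(InputStream p4Output, int maxLines)
                throws IOException {
            List<String> lines = new ArrayList<String>();
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(p4Output));
            try {
                String line;
                while (lines.size() < maxLines
                        && (line = reader.readLine()) != null) {
                    lines.add(line);
                }
            } finally {
                reader.close();
            }
            return lines;
        }
    }

Whatever limit gets chosen, a capped reader like this bounds worst-case memory by the limit rather than by the size of the submit.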