Maybe not a Jenkins issue.
Having the same issue, reproduced on every build for a given slave (but with nothing relevant in its logs), I tried to disconnect and reconnect the slave:
[05/06/13 14:06:04] Launching slave agent
$ ssh slavedns java -jar ~/bin/slave.jar
<===[JENKINS REMOTING CAPACITY]===<===[JENKINS REMOTING CAPACITY]===>>channel started
channel started
Slave.jar version: 2.22
This is a Unix slave
Slave.jar version: 2.22
This is a Unix slave
Copied maven-agent.jar
Copied maven3-agent.jar
Copied maven3-interceptor.jar
Copied maven-agent.jar
Copied maven-interceptor.jar
Copied maven2.1-interceptor.jar
Copied plexus-classworld.jar
Copied maven3-agent.jar
Copied maven3-interceptor.jar
Copied classworlds.jar
Copied maven-interceptor.jar
Copied maven2.1-interceptor.jar
Copied plexus-classworld.jar
Copied classworlds.jar
Evacuated stdout
Evacuated stdout
ERROR: Unexpected error in launching a slave. This is probably a bug in Jenkins
(...)java.lang.IllegalStateException: Already connected
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:459)
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:339)
at hudson.slaves.CommandLauncher.launch(CommandLauncher.java:122)
at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:222)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Connection terminated
channel stopped
ERROR: Unexpected error in launching a slave. This is probably a bug in Jenkins
(...)java.lang.NullPointerException
at org.jenkinsci.modules.slave_installer.impl.ComputerListenerImpl.onOnline(ComputerListenerImpl.java:32)
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:471)
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:339)
at hudson.slaves.CommandLauncher.launch(CommandLauncher.java:122)
at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:222)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
channel stopped
Connection terminated
Then the slave successfully reconnected itself.
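For reference, here is a minimal sketch (hypothetical, not part of the original report) of what that disconnect/reconnect amounts to against the Jenkins Java API. It would have to run inside the master JVM (e.g. from a plugin or the script console); the node name "slavedns" is taken from the launch log above:

import hudson.model.Computer;
import hudson.slaves.OfflineCause;
import jenkins.model.Jenkins;

public class Reconnect {
    public static void reconnect(String nodeName) throws Exception {
        Computer c = Jenkins.getInstance().getComputer(nodeName);
        // Wait for the channel to close, then force a fresh launch.
        c.disconnect(new OfflineCause.ByCLI("manual reconnect")).get();
        c.connect(true).get();
    }
}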
It turned out a thread was looping and consuming 100% CPU on the slave. Killing that process solved the issue.
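When jstack is unavailable, one hypothetical way to spot such a spinning thread is the standard ThreadMXBean API, assuming code can still be run inside the affected JVM (which was only partially true here). The class name BusyThreads is made up for this sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id);   // nanoseconds; -1 if disabled or unsupported
            ThreadInfo ti = mx.getThreadInfo(id); // null if the thread has already exited
            if (ti != null && cpu > 0) {
                System.out.printf("%8d ms  %s%n", cpu / 1000000L, ti.getThreadName());
            }
        }
    }
}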
Strangely, some system and Java commands (ps, cat, less, jstack, trace, ...) did not work until that process was killed, whereas others (top, jps, renice, kill, ...) did. That could explain the odd Jenkins 4-minute ping timeout in the log (java.util.concurrent.TimeoutException: Ping started on 1367743028681 hasn't completed at 1367743268682): the system was partially frozen, with very unstable response times.
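The 4-minute figure comes straight from the two epoch-millisecond timestamps in that TimeoutException:

1367743268682 - 1367743028681 = 240001 ms ≈ 240 s ≈ 4 minutes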
Note that the looping thread came from a previous job that had not stopped cleanly after hitting the build timeout:
03:00:16.868 Build timed out (after 180 minutes). Marking the build as aborted.
03:00:16.873 Build was aborted
03:00:16.874 Archiving artifacts
03:00:16.874 ERROR: Failed to archive artifacts: **/lognxserver/config/distribution.properties
03:00:16.875 hudson.remoting.ChannelClosedException: channel is already closed
03:00:16.876 at hudson.remoting.Channel.send(Channel.java:494)
03:00:16.876 at hudson.remoting.Request.call(Request.java:129)
03:00:16.876 at hudson.remoting.Channel.call(Channel.java:672)
03:00:16.876 at hudson.EnvVars.getRemote(EnvVars.java:212)
03:00:16.876 at hudson.model.Computer.getEnvironment(Computer.java:882)
03:00:16.876 at jenkins.model.CoreEnvironmentContributor.buildEnvironmentFor(CoreEnvironmentContributor.java:28)
03:00:16.876 at hudson.model.Run.getEnvironment(Run.java:2028)
03:00:16.876 at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:927)
03:00:16.876 at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:115)
03:00:16.876 at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
03:00:16.876 at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:798)
03:00:16.876 at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:770)
03:00:16.876 at hudson.model.Build$BuildExecution.post2(Build.java:183)
03:00:16.876 at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:720)
03:00:16.876 at hudson.model.Run.execute(Run.java:1600)
03:00:16.876 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
03:00:16.876 at hudson.model.ResourceController.execute(ResourceController.java:88)
03:00:16.876 at hudson.model.Executor.run(Executor.java:237)
03:00:16.876 Caused by: java.io.IOException
03:00:16.876 at hudson.remoting.Channel.close(Channel.java:910)
03:00:16.876 at hudson.slaves.ChannelPinger$1.onDead(ChannelPinger.java:110)
03:00:16.876 at hudson.remoting.PingThread.ping(PingThread.java:120)
03:00:16.876 at hudson.remoting.PingThread.run(PingThread.java:81)
03:00:16.876 Caused by: java.util.concurrent.TimeoutException: Ping started on 1367743028681 hasn't completed at 1367743268682
03:00:16.876 ... 2 more