JENKINS-23917

Protocol deadlock while uploading artifacts from ppc64

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Major
    • Component/s: core, remoting

      I've encountered an ssh2 channel protocol issue when a ppc64 slave communicates with an x64 master.

      Most operations, like sending build logs, work fine. When artifacts are uploaded at the end of the build, the build stalls indefinitely at:

      Archiving artifacts
      

      If I get stack dumps of slave and master using jstack, I see the master waiting to read from the slave:

      "Channel reader thread: Fedora16-ppc64-Power7-osuosl-karman" prio=10 tid=0x00000000038c2800 nid=0x6de7 in Object.wait() [0x00007f825ef8b000]
         java.lang.Thread.State: WAITING (on object monitor)
              at java.lang.Object.wait(Native Method)
              - waiting on <0x00000000bf5802e0> (a com.trilead.ssh2.channel.Channel)
              at java.lang.Object.wait(Object.java:502)
              at com.trilead.ssh2.channel.FifoBuffer.read(FifoBuffer.java:212)
              - locked <0x00000000bf5802e0> (a com.trilead.ssh2.channel.Channel)
              at com.trilead.ssh2.channel.Channel$Output.read(Channel.java:127)
              at com.trilead.ssh2.channel.ChannelManager.getChannelData(ChannelManager.java:946)
              - locked <0x00000000bf5802e0> (a com.trilead.ssh2.channel.Channel)
              at com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:58)
              at com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:79)
              at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:82)
              at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:67)
              at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:93)
              at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:33)
              at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
              at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
      

      and the slave is waiting for data from the master:

      "Channel reader thread: channel" prio=10 tid=0x00000fff940fedd0 nid=0x558e runnable [0x00000fff6dc6d000]
         java.lang.Thread.State: RUNNABLE
              at java.io.FileInputStream.readBytes(Native Method)
              at java.io.FileInputStream.read(FileInputStream.java:236)
              at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
              at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
              - locked <0x00000fff78ba9f98> (a java.io.BufferedInputStream)
              at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:82)
              at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:67)
              at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:93)
              at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:33)
              at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
              at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
      

      Of course I can't get those dumps at exactly the same moment, even if that were meaningful given network latencies and buffering, but repeated runs never show any other state for either thread.

      tshark shows that there's some SSH chatter going on:

        0.000000 SLAVE -> MASTER SSH 126 Encrypted response packet len=60
        0.176121 MASTER -> SLAVE SSH 94 Encrypted request packet len=28
        0.176151 SLAVE -> MASTER TCP 66 ssh > 37501 [ACK] Seq=61 Ack=29 Win=707 Len=0 TSval=4141397874 TSecr=2808266826
      

      but it could well be low-level ssh keepalives or similar, as it occurs at precise 5-second intervals with nothing much else happening. There are three master->slave ssh connections, so it's not even guaranteed that it's the one associated with the stuck channel.

      My first thought is endianness.

      I don't really know how to begin debugging this issue, though.

        1. config.xml
          1.0 kB
        2. jenkins-master-idle-stack.txt
          47 kB
        3. jenkins-master-stack.txt
          49 kB
        4. jenkins-slave-stack.txt
          6 kB
        5. slavelog-from-master.txt
          4 kB
        6. slavelog-from-slave.txt
          1 kB

          [JENKINS-23917] Protocol deadlock while uploading artifacts from ppc64

          Craig Ringer added a comment -

          I've progressively reduced the archive file size. It has got stuck with files as small as 128k.

          So far tests with 16k files haven't failed. I'm trying to narrow down whether it can occur with very small archive files (and is just less likely) or whether it's size-related.


          Craig Ringer added a comment - edited

          Just hit the same issue at a different place. Stuck at:

          Started by user Craig Ringer
          [EnvInject] - Loading node environment variables.
          

          I scheduled another build while this one was still running, the first time I've done that. It got queued since there's only one executor, but if queuing still triggers communication with the slave over the ssh session, maybe that's why?

          When I cancelled it, the exception was:

          Started by user Craig Ringer
          [EnvInject] - Loading node environment variables.
          ERROR: SEVERE ERROR occurs
          org.jenkinsci.lib.envinject.EnvInjectException: java.lang.InterruptedException
          	at org.jenkinsci.plugins.envinject.service.EnvironmentVariablesNodeLoader.gatherEnvironmentVariablesNode(EnvironmentVariablesNodeLoader.java:77)
          	at org.jenkinsci.plugins.envinject.EnvInjectListener.loadEnvironmentVariablesNode(EnvInjectListener.java:81)
          	at org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:39)
          	at hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:589)
          	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:493)
          	at hudson.model.Run.execute(Run.java:1732)
          	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
          	at hudson.model.ResourceController.execute(ResourceController.java:88)
          	at hudson.model.Executor.run(Executor.java:234)
          Caused by: java.lang.InterruptedException
          	at java.lang.Object.wait(Native Method)
          	at hudson.remoting.Request.call(Request.java:146)
          	at hudson.remoting.Channel.call(Channel.java:739)
          	at hudson.FilePath.act(FilePath.java:1011)
          	at org.jenkinsci.plugins.envinject.service.EnvironmentVariablesNodeLoader.gatherEnvironmentVariablesNode(EnvironmentVariablesNodeLoader.java:44)
          	... 8 more
          

          followed by slave agent death with:

          java.io.IOException: Unexpected termination of the channel
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
          Caused by: java.io.EOFException
          	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2323)
          	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2792)
          	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
          	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:298)
          	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
          	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          


          Craig Ringer added a comment -

          A key clue is that slave.jar appears to be dying suddenly (the process terminates) every couple of jobs when running with a 16 kB archive file. It's unclear whether this was happening before with larger archives and I simply wasn't noticing because Jenkins was relaunching the worker.

          This seems to happen after successful job completion, though.

          After reconfiguring the slave launcher with

          ulimit -c unlimited && 
          

          as a prefix and

           -slaveLog "log-$(date -Iseconds).txt"
          

          then rerunning the small 16 kB archive job until the slave died (on the 3rd run), I was able to capture logs from both sides.

          I'm now going to test base64-encoded streams (in case it's an ssh 8-bit-clean issue) and direct TCP/IP.


          Craig Ringer added a comment -

          Attached are logs from when a slave dies. slavelog-from-master is from the Node log, taken from the web UI; -from-slave is from the file on the slave machine specified with the "-slaveLog" command-line parameter to the slave agent.


          Craig Ringer added a comment -

          With -text, the log shown for the node on the master when it dies:

          <===[JENKINS REMOTING CAPACITY]===><===[HUDSON TRANSMISSION BEGINS]===channel started
          Slave.jar version: 2.43
          This is a Unix slave
          Slave successfully connected and online
          Jul 23, 2014 5:57:48 AM hudson.remoting.SynchronousCommandTransport$ReaderThread run
          SEVERE: I/O error in channel channel
          java.io.StreamCorruptedException: invalid stream header: AC64736F
          	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
          	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:297)
          	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
          	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          channel stopped
          ERROR: Connection terminated
          java.io.IOException: Unexpected termination of the channel
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
          Caused by: java.io.EOFException
          	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2323)
          	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2792)
          	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
          	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:298)
          	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
          	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          [07/23/14 05:57:49] [SSH] Connection closed.
          ERROR: [07/23/14 05:57:49] slave agent was terminated
          java.io.IOException: Unexpected termination of the channel
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
          Caused by: java.io.EOFException
          	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2323)
          	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2792)
          	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
          	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:298)
          	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
          	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          

          and from the slave itself:

          channel startedchannel started
          
          Jul 23, 2014 5:57:48 AM hudson.remoting.SynchronousCommandTransport$ReaderThread run
          SEVERE: I/O error in channel channel
          java.io.StreamCorruptedException: invalid stream header: AC64736F
                  at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
                  at java.io.ObjectInputStream.<init>(ObjectInputStream.java:297)
                  at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
                  at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
                  at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          Jul 23, 2014 5:57:48 AM hudson.remoting.SynchronousCommandTransport$ReaderThread run
          SEVERE: I/O error in channel channel
          java.io.StreamCorruptedException: invalid stream header: AC64736F
                  at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
                  at java.io.ObjectInputStream.<init>(ObjectInputStream.java:297)
                  at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
                  at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
                  at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
          channel stoppedchannel stopped
          


          Craig Ringer added a comment -

          I can't for the life of me figure out how to use the -tcp option to make the slave passively accept a TCP connection from the master.

          In any case, -text still fails, so it seems unlikely to be an issue with SSH mangling binary data.


          Craig Ringer added a comment -

          When I switch the node to JNLP (so it uses direct TCP/IP as a transport, rather than SSH), I can no longer reproduce this despite repeated test runs.

          So, so far:

          • Only observed on ppc64
          • Only observed for ssh slaves
          • Using -text protocol does not help


          Craig Ringer added a comment -

          I've left a JNLP worker running for some hours, running the same job with the 1 MB and 10 MB artifact sizes that caused intermittent problems over the SSH transport. No problems.


          jimis added a comment -

          Hi, I'm experiencing the same issue using Jenkins 1.580.1 on ppc64 running AIX 5.3. In particular I'm seeing the exact same bytes that you have posted in the attached file "slavelog-from-slave.log":

          java.io.StreamCorruptedException: invalid stream header: 009AACED

          Searching the web I found the following explanation for this sequence of bytes:

          Object stream data is preceded by a 4 byte 'magical' sequence AC ED 00 05. An ObjectInputStream will peek for this data at construction time rather than before the first read. And that's logical: one wants to be sure it is a proper stream before being too far in an application. The sequence is buffered by the ObjectOutputStream at construction time so that it is pushed on the stream at the first write. This method often leads to complexities in buffered situations or transferring via pipes or sockets. Fortunately there is a just as simple as effective solution to all these problems: Flush the ObjectOutputStream immediately after construction!

          Looking at the similarity in the byte sequence, it looks like either an endianness issue or an off-by-one error.

          Thank you for providing the workaround, I'll set up the buildslave using JNLP.
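
          For reference, here is a minimal, self-contained sketch of the flush-after-construction pattern described in the quote above. It shows only generic java.io behaviour and is not Jenkins remoting code; the class and variable names are illustrative.

          import java.io.*;

          // Demonstrates the 4-byte object-stream header (AC ED 00 05) and why
          // flushing an ObjectOutputStream immediately after construction avoids
          // reader-side hangs behind buffered transports.
          public class ObjectStreamHeaderDemo {
              public static void main(String[] args) throws Exception {
                  // The header is the 2-byte magic 0xACED followed by the
                  // 2-byte stream version 0x0005.
                  System.out.printf("magic=%04X version=%04X%n",
                          ObjectStreamConstants.STREAM_MAGIC & 0xFFFF,
                          ObjectStreamConstants.STREAM_VERSION & 0xFFFF);

                  // ObjectOutputStream writes the header in its constructor, and
                  // ObjectInputStream blocks in its constructor until it has read
                  // the header back. Flushing right after construction pushes the
                  // header through any buffering layer (pipe, socket, ssh channel)
                  // so the reader is not left blocked waiting for it behind other
                  // buffered data.
                  PipedInputStream pipeIn = new PipedInputStream();
                  PipedOutputStream pipeOut = new PipedOutputStream(pipeIn);

                  ObjectOutputStream oos =
                          new ObjectOutputStream(new BufferedOutputStream(pipeOut));
                  oos.flush(); // the fix recommended in the quoted explanation

                  ObjectInputStream ois = new ObjectInputStream(pipeIn); // reads AC ED 00 05
                  oos.writeObject("hello");
                  oos.flush();
                  System.out.println(ois.readObject()); // prints "hello"
              }
          }

          Judging from the stack traces above, Jenkins remoting constructs a fresh ObjectInputStream (ObjectInputStreamEx) over each received command block, which is why a shifted or corrupted block shows up immediately as "StreamCorruptedException: invalid stream header" rather than as a later read error.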


          Mark Waite added a comment -

          The Jenkins platform SIG has stopped all efforts to support ppc64. If others want to reinstate those efforts, they should join the platform SIG meetings, provide hardware, and submit pull requests.


            Assignee: Unassigned
            Reporter: Craig Ringer (ringerc)
            Votes: 2
            Watchers: 5
