OK, I think I understand what's going on here.
In Jenkins, the process I/O goes in the opposite direction: we supply an OutputStream that the process's stdout gets written to, as opposed to having the caller drain the stream via read(). And Jenkins doesn't close that OutputStream when the process closes its stdout to indicate EOF. So a plugin like Perforce that needs to read from stdout never gets an EOF signal in the stream.
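To make the inversion concrete, here's a minimal sketch contrasting the usual JDK pull model with the Jenkins-style push model (the pumping loop below is a simplified stand-in, not the real Launcher internals):

```java
import java.io.*;

public class StdoutDirection {
    public static void main(String[] args) throws Exception {
        // Usual JDK style: the caller pulls from the process's stdout
        // and sees EOF naturally when the process closes the stream.
        Process p = new ProcessBuilder("echo", "hello").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {  // null means EOF
                System.out.println("pulled: " + line);
            }
        }
        p.waitFor();

        // Jenkins-style inversion (sketch): the caller hands over an
        // OutputStream and Jenkins pumps the process output into it.
        // The consumer of 'sink' never sees EOF unless someone closes
        // the stream explicitly.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        Process q = new ProcessBuilder("echo", "world").start();
        q.getInputStream().transferTo(sink);  // the pumping Jenkins does
        q.waitFor();
        System.out.println("pumped: " + sink.toString().trim());
    }
}
```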
The only way out in the current design is to close the output stream after joining the process, which is what HudsonPipedOutputStream does. But this only works if the data has indeed been written to the output stream by the time the join returns successfully.
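The close-after-join workaround looks roughly like this; the pipe plumbing below illustrates the idea behind HudsonPipedOutputStream, with a thread standing in for the launched process:

```java
import java.io.*;

public class CloseAfterJoin {
    public static void main(String[] args) throws Exception {
        // The plugin wants to parse stdout as an InputStream, so it
        // bridges the OutputStream it gives Jenkins through a pipe.
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        Thread proc = new Thread(() -> {
            try {
                // Stand-in for the process output being pumped into 'out'.
                out.write("p4 output...\n".getBytes());
                // Note: Jenkins does NOT close 'out' when the process exits.
            } catch (IOException ignored) {
            }
        });
        proc.start();
        proc.join();   // stand-in for Proc.join()
        out.close();   // the workaround: close after join, so the reader sees EOF

        try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = r.readLine()) != null) {  // terminates only because of close()
                System.out.println(line);
            }
        }
    }
}
```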
This is the case for local execution, as Proc.join() internally waits for the stream-pumping threads to complete. But for remote execution this isn't guarantee enough: it merely means that the last bits of data have left the remote end to start their journey, while additional steps are still involved before the data actually gets written to the locally exported OutputStream, hence the race condition.
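The race, and the barrier trick that fixes it, can be sketched without any remoting machinery. A single-threaded executor stands in for the channel's in-order delivery of stream writes; the real Remoting API differs, but the FIFO-barrier idea is the same:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IoBarrier {
    public static void main(String[] args) throws Exception {
        // Stand-in for the remoting channel: writes are delivered
        // locally one at a time, in order.
        ExecutorService channel = Executors.newSingleThreadExecutor();
        StringBuilder local = new StringBuilder();

        // The remote side queues the last chunk of stdout for delivery...
        channel.submit(() -> local.append("last-bytes"));
        // ...and a naive join() could return right here: the data has
        // *left* the remote side but may not have been applied locally yet.

        // The fix: submit a no-op barrier and wait for it. Because
        // delivery is FIFO, once the barrier completes, every earlier
        // write has been applied locally too.
        channel.submit(() -> { }).get();
        System.out.println(local);  // now guaranteed to hold "last-bytes"
        channel.shutdown();
    }
}
```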
So the short-term fix is to ensure that RemoteProc.join() makes sure all the data has arrived locally before returning. The long-term proper fix is to allow code to read directly from stdout/stderr as an InputStream, thereby eliminating this pseudo-EOF business.
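The long-term shape would look like this from the caller's side: stdout is handed out as an InputStream, so EOF arrives naturally when the process closes it and no manual close() is needed. (This is plain ProcessBuilder code as a stand-in for what such a Launcher API would feel like.)

```java
import java.io.*;

public class DirectRead {
    public static void main(String[] args) throws Exception {
        // Reading stdout directly as an InputStream: EOF comes from the
        // process itself, so there is no pseudo-EOF bookkeeping.
        Process p = new ProcessBuilder("printf", "a\nb\n").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println("line: " + line);
            }
        } // loop ended because the process closed its stdout
        p.waitFor();
    }
}
```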