• 2.1.1

      Something changed in the naming of the workspace folders on the Jenkins master node. Previously, the name was suffixed with a long random string (names are examples only; the exact length differs):

      MyProjectA1_MyBranch-ZOBMWQA2JSVUZRAPJAQ3NT3TSBIUQOS26N5CF3XJB@libs
      MyProjectA1_MyBranch-ZOBMWQA2JSVUZRAPJAQ3NT3TSBIUQOS26N5CF3XJB@script
      MyVeryLongBranchName-NHOUUZPM5MARPAN7OGASNLPNCKRLL26RQKEZIEEKC@libs
      MyVeryLongBranchName-NHOUUZPM5MARPAN7OGASNLPNCKRLL26RQKEZIEEKC@script
      MyVeryLongBranchName-TN6NSHXVGMKRKUOVV7CKNODZOK3JSHI4CQYCOR4E6@libs
      MyVeryLongBranchName-TN6NSHXVGMKRKUOVV7CKNODZOK3JSHI4CQYCOR4E6@script
      

      However, this suffix now seems to have been removed, causing name clashes when different repositories have long but identical branch names:

      MyProjectA1_MyBranch@libs
      MyProjectA1_MyBranch@script
      MyVeryLongBranchName@libs    <<-\
      MyVeryLongBranchName@script     |   <<-\
      MyVeryLongBranchName@libs    <<-/      |
      MyVeryLongBranchName@script         <<-/
      

      The last two repositories are unrelated but map to the same directory, causing "unrelated repository" errors during checkout.
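
      For illustration, here is a minimal Java sketch (not the plugin's actual code) of the kind of scheme the old names suggest: sanitize the full job name, keep a readable (possibly tail-truncated) part, and append a digest of the full name so that two jobs whose names only differ in the truncated-away part still get distinct folders. The class/method names, the SHA-256/hex encoding, and the length limits are assumptions made for this example only.

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      public class WorkspaceNameSketch {
          // Hypothetical: readable part + digest of the full name, so equal tails stay distinct.
          static String uniqueFolderName(String fullJobName, int maxReadableChars) throws NoSuchAlgorithmException {
              String safe = fullJobName.replaceAll("[^a-zA-Z0-9_.-]", "_"); // '/' etc. become '_', as in the observed folder names
              String readable = safe.length() <= maxReadableChars
                      ? safe
                      : safe.substring(safe.length() - maxReadableChars);   // keep the tail, as in the truncated names above
              byte[] digest = MessageDigest.getInstance("SHA-256")
                      .digest(fullJobName.getBytes(StandardCharsets.UTF_8));
              StringBuilder hex = new StringBuilder();
              for (byte b : digest) {
                  hex.append(String.format("%02X", b));
              }
              // The real plugin's suffix uses a different encoding and length; hex is used here only for illustration.
              return readable + "-" + hex.substring(0, 40);
          }

          public static void main(String[] args) throws NoSuchAlgorithmException {
              System.out.println(uniqueFolderName("Repo/ProjectA/Config1/BranchWithRatherLongName", 32));
              System.out.println(uniqueFolderName("Repo/ProjectB/Config1/BranchWithRatherLongName", 32));
          }
      }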

          [JENKINS-54640] Workspace folders are not unique

          kpop added a comment -
          Nov 21, 2018 8:23:35 AM FINE  allocating $WORKSPACE/Config1_BranchWithRatherLongName for WorkflowJob@11fe5  [Repo/ProjectA/Config1/BranchWithRatherLongName]
          Nov 21, 2018 8:33:18 AM FINER index collision on    Config1_BranchWithRatherLongName for WorkflowJob@16ea9b8[Repo/ProjectB/Config1/BranchWithRatherLongName]
          Nov 21, 2018 8:33:18 AM FINE  allocating $WORKSPACE/nfig1_BranchWithRatherLongName_2 for WorkflowJob@16ea9b8[Repo/ProjectB/Config1/BranchWithRatherLongName]
          

          The above log illustrates that collisions are correctly resolved by appending a suffix to the folder name to make it unique. That unique name is also correctly stored in workspaces.txt.
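
          The behaviour visible in that log can be pictured with a small sketch like the one below. This is not the plugin's WorkspaceLocatorImpl code, just an illustration of the idea: the first job keeps the (truncated) name, and later jobs that collide get a numbered variant. (Judging by the log, the real plugin also re-truncates the name to stay within the length budget and persists the allocation in workspaces.txt.)

          import java.util.HashSet;
          import java.util.Set;

          public class UniqueFolderAllocator {
              private final Set<String> allocated = new HashSet<>();

              // Hypothetical collision handling: the first caller keeps the plain name, later callers get _2, _3, ...
              String allocate(String truncatedName) {
                  if (allocated.add(truncatedName)) {
                      return truncatedName;
                  }
                  for (int i = 2; ; i++) {
                      String candidate = truncatedName + "_" + i; // "index collision" -> numbered variant
                      if (allocated.add(candidate)) {
                          return candidate;                        // e.g. the "..._2" folder in the log above
                      }
                  }
              }

              public static void main(String[] args) {
                  UniqueFolderAllocator a = new UniqueFolderAllocator();
                  System.out.println(a.allocate("Config1_BranchWithRatherLongName")); // first job keeps the plain name
                  System.out.println(a.allocate("Config1_BranchWithRatherLongName")); // second job gets the _2 variant
              }
          }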


          Lars-Magnus Skog added a comment - edited

          I've noticed that workspace names sometimes get cut off. I'm not sure if this is a known issue or not, but this issue was the closest I could find.

          Example: below is a list of workspaces for `deltachat/deltachat-desktop`. As you can see, one of the workspaces doesn't start with `deltachat-desktop`, but instead with `hat-desktop` or `ltachat-desktop`:

          [attached workspace listing not reproduced here]

          Please advise if you want me to create a separate issue for this.


          Josh Soref added a comment - edited

          FWIW, this fix doesn't appear to have fixed our flavor of the problem.

          We tried upgrading to 2.1.1 and that broke things the same way the previous version did. We're downgrading (to 2.0.20) now.


          kpop added a comment -

          ralphtheninja, I believe that truncating the folder names is intentional behavior. I see the same thing without any loss of functionality. workspaces.txt lists the full project name and project path, as well as the folder that is allocated for it.

          jsoref, check whether workspaces.txt still contains duplicate folder names. I cleared my entire workspace and also removed that file to get it working, but it might be enough to remove all duplicates in workspaces.txt along with the corresponding folders. New collisions are avoided, but I'm not sure whether existing collisions are also fixed automatically.
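
          A rough way to check a node for duplicate allocations is sketched below. Note that the exact layout of workspaces.txt is an implementation detail of the plugin; this sketch simply assumes one allocation per line with the allocated folder as the last whitespace-separated token, so adjust it to whatever the file actually looks like on your nodes.

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Paths;
          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;

          public class DuplicateWorkspaceCheck {
              public static void main(String[] args) throws IOException {
                  // Assumed format: one allocation per line, allocated folder as the last token.
                  List<String> lines = Files.readAllLines(Paths.get(args.length > 0 ? args[0] : "workspaces.txt"));
                  Map<String, Integer> counts = new HashMap<>();
                  for (String line : lines) {
                      String trimmed = line.trim();
                      if (trimmed.isEmpty()) {
                          continue;
                      }
                      String[] tokens = trimmed.split("\\s+");
                      counts.merge(tokens[tokens.length - 1], 1, Integer::sum);
                  }
                  counts.forEach((folder, n) -> {
                      if (n > 1) {
                          System.out.println("duplicate allocation: " + folder + " (" + n + " entries)");
                      }
                  });
              }
          }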


          Lars-Magnus Skog added a comment -

          kpop Ok. But why would it be intentional? What's the reason for doing it?


          Josh Soref added a comment -
          ERROR: Unable to launch the agent for somenode
          java.nio.file.NoSuchFileException: /home/jenkins/workspace/workspaces.txt
          	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
          	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
          	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
          	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
          	at java.nio.file.Files.newByteChannel(Files.java:361)
          	at java.nio.file.Files.newByteChannel(Files.java:407)
          	at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
          	at java.nio.file.Files.newInputStream(Files.java:152)
          	at hudson.FilePath$Read.invoke(FilePath.java:1991)
          	at hudson.FilePath$Read.invoke(FilePath.java:1983)
          	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3084)
          	at hudson.remoting.UserRequest.perform(UserRequest.java:212)
          	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
          	at hudson.remoting.Request$2.run(Request.java:369)
          	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
          Caused: java.io.IOException
          	at hudson.remoting.FastPipedInputStream.read(FastPipedInputStream.java:169)
          	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
          	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
          	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
          	at java.io.InputStreamReader.read(InputStreamReader.java:184)
          	at java.io.BufferedReader.fill(BufferedReader.java:161)
          	at java.io.BufferedReader.readLine(BufferedReader.java:324)
          	at java.io.BufferedReader.readLine(BufferedReader.java:389)
          	at jenkins.branch.WorkspaceLocatorImpl.load(WorkspaceLocatorImpl.java:221)
          	at jenkins.branch.WorkspaceLocatorImpl.access$500(WorkspaceLocatorImpl.java:80)
          	at jenkins.branch.WorkspaceLocatorImpl$Collector.onOnline(WorkspaceLocatorImpl.java:518)
          	at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:693)
          	at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:432)
          	at hudson.slaves.CommandLauncher.launch(CommandLauncher.java:153)
          	at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)
          	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
          	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
          	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          	at java.lang.Thread.run(Thread.java:748)
          ERROR: Connection terminated
          java.io.EOFException
          	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2671)
          	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3146)
          	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:858)
          	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:354)
          	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
          	at hudson.remoting.Command.readFrom(Command.java:140)
          	at hudson.remoting.Command.readFrom(Command.java:126)
          	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
          Caused: java.io.IOException: Unexpected termination of the channel
          	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
          ERROR: Process terminated with exit code 0

          That's a terrible behavior.


          Josh Soref added a comment -

          So, all I've done is rename workspaces.txt. I don't really want to have to go to all 10 of my computers and delete entire trees if I can avoid it...


          Josh Soref added a comment -

          So, I think that the plugin is not being careful about whose workspaces.txt it's considering...

          We're using

          pipeline {
           parallel {
            stage {
            }
            stage {
            }
           }
          }

          A quick grep for our project across the various build nodes shows that the hashed directory listed is the one from one build node, but not from the failing build node (which has a different path in its workspaces.txt file).


          Jesse Glick added a comment -

          ralphtheninja yes, the truncation is intentional, to limit the length of folder names, since some tools (especially, though not exclusively, on Windows) have trouble with long names.

          jsoref please open a separate bug report linked to JENKINS-2111, ideally with steps to reproduce from scratch if you can find them. Your stack trace suggests that Jenkins is trying to open a workspaces.txt which does not exist…immediately after verifying that it does exist. While I could certainly make this code more defensive in various ways, I would like to understand what is happening first.
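
          For context, the "more defensive" handling mentioned above could look something like the sketch below: treat a workspaces.txt that has vanished between the existence check and the read the same as one that never existed, instead of failing the agent launch. This is only an illustration, not the plugin's actual load() implementation.

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.NoSuchFileException;
          import java.nio.file.Path;
          import java.util.Collections;
          import java.util.List;

          public class DefensiveIndexLoad {
              // Hypothetical defensive read: a missing file yields an empty index instead of an exception.
              static List<String> loadIndex(Path workspacesTxt) throws IOException {
                  try {
                      return Files.readAllLines(workspacesTxt);
                  } catch (NoSuchFileException e) {
                      return Collections.emptyList(); // file disappeared (or never existed): start empty
                  }
              }
          }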


          Lars-Magnus Skog added a comment -

          Thanks everyone for the feedback. Jenkins seems to be a friendly and helpful community! <3

