
Missing workspace - workspace deleted during concurrent build

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Environment: Jenkins ver. 2.46.1, Linux Ubuntu 16.04

      I'm using Pipeline to automate builds, and the same pipeline may be triggered concurrently.

      What happens is: I use node('label'), which allocates a workspace on a slave with the label 'docker_build', and then I use ws() to switch to another directory to use as the build workspace.

      Here's what I am doing:

       

      {code:java}
      node('docker_build') {
          currentBuild.description = "${MODEL_NAME}/${BUILD_TYPE} - ${BUILD_LABEL} - Executor ${EXECUTOR_NUMBER} ${NODE_NAME}"
          def actual_workspace = "/home/devops/jenkins_slave_robot/workspace/TinderBox/Chroot_Build/${PROJECT}/${EXECUTOR_NUMBER}"
          ...

          stage('Checkout') {
              ws(actual_workspace) {
                  ...
              }
          }

          stage('Build') {
              sh 'whoami'
              def image = docker.image('172.16.181.203:5000/fortios1.0:1.0.0.10')
              image.inside("--privileged -u root -v ${actual_workspace}:${chroot_path}/code") {
                  ...
              }
          }
      }
      {code}

      Occasionally, I run into a missing-workspace error: the build is interrupted and fails. The error message looks something like this:

      {code:java}
      ERROR: missing workspace /home/devops/jenkins_slave_robot/workspace/TinderBox/FortiOS/Build_Steps/5.4_Chroot_Build on jenkins-smoke-slave03(172.16.182.123)
      {code}

      From the error message, it seems that the original workspace, which was allocated when I used node('label'), has disappeared. I understand that when builds run concurrently, the workspace name gets an @<number> suffix, so I can't understand how the workspace could be gone.

      So far it has happened twice in the last 5 months.
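
      For what it's worth, here is a minimal sketch (a hypothetical job, not my real Jenkinsfile) of the allocation behavior I'd expect: each concurrent build leases its own directory, with @<number> appended when the default name is taken, and as far as I understand ws() does the same when the requested path is already leased:

      {code:java}
      // Minimal sketch: run two builds of this job concurrently on one node.
      node('docker_build') {
          // The first build typically gets .../workspace/<job>,
          // the second .../workspace/<job>@2, and so on.
          echo "Workspace leased by node(): ${env.WORKSPACE}"

          ws('/tmp/my_custom_workspace') {  // hypothetical path
              // ws() is also supposed to append @<number> if the requested
              // directory is currently leased by another running build.
              echo "Workspace leased by ws(): ${env.WORKSPACE}"
          }
      }
      {code}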

       

          [JENKINS-44659] Missing workspace - workspace deleted during concurrent build

          Kevin Yu created issue
          Kevin Yu made changes -
          Summary: "Missing workspace" → "Missing workspace - workspace deleted during concurrent build"

          Oleg Nenashev added a comment -

          CC jglick

          From the Jenkins core perspective this seems to be valid behavior. Once you stop using the workspace, it can be deallocated by Jenkins at any moment.
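
          To illustrate what I mean (a rough sketch, with a made-up path): the lease obtained by ws() only covers the body of its closure, and once the closure exits the directory is fair game again:

          {code:java}
          node('docker_build') {
              ws('/data/my_build_area') {  // hypothetical path
                  // The directory is leased (locked) only while this closure
                  // runs; concurrent builds get '@2'-style variants meanwhile.
                  sh 'make'
              }
              // Once the closure exits, the lease is released, so Jenkins may
              // reuse or clean up the directory at any moment.
          }
          {code}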


          Kevin Yu added a comment -

          oleg_nenashev Thanks for the reply. I can understand that, but is there a way to let the user specify the workspace when using the label? Or does node() have to assign a default workspace first, before ws()?
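
          In other words, would something like this be the supported way to hold on to the custom workspace (a sketch based on my pipeline above; the checkout step is a placeholder, not my actual SCM configuration)?

          {code:java}
          node('docker_build') {
              def actual_workspace = "/home/devops/jenkins_slave_robot/workspace/TinderBox/Chroot_Build/${PROJECT}/${EXECUTOR_NUMBER}"
              // Keep every stage inside one ws() block so the lease on the
              // custom workspace is held for the entire build.
              ws(actual_workspace) {
                  stage('Checkout') {
                      checkout scm  // placeholder for the job's real SCM step
                  }
                  stage('Build') {
                      sh 'whoami'
                  }
              }
          }
          {code}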


          Oleg Nenashev added a comment -

          samsun387 Hard to say. You could probably implement this case with the External Workspace Manager plugin instead of the ws() step. See https://github.com/jenkinsci/external-workspace-manager-plugin/
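
          A rough sketch of that approach, assuming a disk pool (here named 'diskpool1') has been defined in the plugin's global configuration:

          {code:java}
          // External Workspace Manager: allocate a directory from a disk pool
          // outside any node block; 'diskpool1' is an assumed pool id.
          def extWorkspace = exwsAllocate 'diskpool1'

          node('docker_build') {
              exws(extWorkspace) {
                  // The directory is managed by the plugin rather than being
                  // leased per-build, so it survives across node() blocks.
                  sh 'make'  // placeholder build step
              }
          }
          {code}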


          Jesse Glick added a comment -

          Sounds like questions for the users’ list, not a valid bug.

          Jesse Glick made changes -
          Resolution: New: Not A Defect
          Status: Open → Resolved

          Kevin Yu added a comment -

          Just to report back - I think it's still an issue... Instead of using ws(), I now use dir() to switch to a different directory. However, I occasionally still run into this issue:

          The error is:

          {code:java}
          missing workspace /home/devops/jenkins_slave_robot/workspace/TinderBox/FortiOS/Build_Steps/5.6_Chroot_Build on jenkins-smoke-slave03(192.168.100.94)
          {code}
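
          The dir()-based version looks roughly like this (a reconstruction with the build steps elided; sh 'make' is a placeholder):

          {code:java}
          node('docker_build') {
              def actual_workspace = "/home/devops/jenkins_slave_robot/workspace/TinderBox/Chroot_Build/${PROJECT}/${EXECUTOR_NUMBER}"
              dir(actual_workspace) {
                  // dir() only changes the working directory; unlike ws(), it
                  // does not lease a new workspace, so the node()-allocated
                  // workspace is still the one the error complains about.
                  sh 'make'
              }
          }
          {code}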


          Peter Wiebe added a comment -

          samsun387 - were you able to resolve this problem? I am currently running into this in a Jenkins setup inside Kubernetes and am pulling my hair out trying to figure out why it is happening.


            Assignee: elhay elhay efrat
            Reporter: samsun387 Kevin Yu
            Votes: 0
            Watchers: 13
