JENKINS-44659: Missing workspace - workspace deleted during concurrent build

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Environment: Jenkins ver. 2.46.1, Linux Ubuntu 16.04

      I'm using a pipeline to automate builds, and the same pipeline may be triggered concurrently.

      I use node('docker_build'), which allocates a workspace on a slave with the label 'docker_build', and then use ws() to switch to another directory as the workspace for the build.

      Here's what I am doing:

      {code:java}
      node('docker_build') {
          currentBuild.description = "${MODEL_NAME}/${BUILD_TYPE} - ${BUILD_LABEL} - Executor ${EXECUTOR_NUMBER} ${NODE_NAME}"
          def actual_workspace = "/home/devops/jenkins_slave_robot/workspace/TinderBox/Chroot_Build/${PROJECT}/${EXECUTOR_NUMBER}"
          ...

          stage('Checkout') {
              ws(actual_workspace) {
                  ...
              }
          }

          stage('Build') {
              sh 'whoami'
              def image = docker.image('172.16.181.203:5000/fortios1.0:1.0.0.10')
              image.inside("--privileged -u root -v ${actual_workspace}:${chroot_path}/code") {
                  ...
              }
          }
      }
      {code}
      Occasionally, I run into a missing-workspace error; the build is interrupted and fails. The error message looks something like this:

      {code:java}
      ERROR: missing workspace /home/devops/jenkins_slave_robot/workspace/TinderBox/FortiOS/Build_Steps/5.4_Chroot_Build on jenkins-smoke-slave03(172.16.182.123)
      {code}

      From the error message, it appears that the original workspace, which was allocated when I used node('docker_build'), has disappeared. I understand that when concurrent builds happen, the workspace name gets an @<number> suffix, so I can't understand how the workspace could be gone.

      So far it has happened twice in the last 5 months.
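      A workaround sketch (editor's suggestion, not from this issue thread): fold a per-build value such as BUILD_NUMBER into the custom workspace path, so that two concurrent runs of the same pipeline can never resolve ws() to the same directory. Note that ${EXECUTOR_NUMBER} alone is not unique across nodes, so two concurrent builds on different slaves can compute the same path. The path layout below follows the report; the choice of BUILD_NUMBER is an assumption:

      {code:java}
      // Sketch only: derive a per-build workspace path so concurrent runs
      // of this pipeline never share (and never delete) each other's directory.
      // BUILD_NUMBER is unique per run of a job; EXECUTOR_NUMBER is not.
      node('docker_build') {
          def actual_workspace = "/home/devops/jenkins_slave_robot/workspace/TinderBox/Chroot_Build/${PROJECT}/${env.BUILD_NUMBER}"

          stage('Checkout') {
              ws(actual_workspace) {
                  // checkout scm, etc.
              }
          }
      }
      {code}

      The trade-off is that each build gets a fresh directory, so incremental checkouts are lost and old workspaces need periodic cleanup.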

       

          Change history

          Kevin Yu created issue -
          Kevin Yu made changes -
          Summary Original: Missing workspace New: Missing workspace - workspace deleted during concurrent build
          Kevin Yu made changes -
          Description edited
          Jesse Glick made changes -
          Resolution New: Not A Defect [ 7 ]
          Status Original: Open [ 1 ] New: Resolved [ 5 ]
          elhay efrat made changes -
          Assignee New: elhay efrat [ elhay ]
          elhay efrat made changes -
          Resolution Original: Not A Defect [ 7 ]
          Status Original: Resolved [ 5 ] New: Reopened [ 4 ]
          elhay efrat made changes -
          Resolution New: Fixed [ 1 ]
          Status Original: Reopened [ 4 ] New: Resolved [ 5 ]
          Jesse Glick made changes -
          Status Original: Resolved [ 5 ] New: Fixed but Unreleased [ 10203 ]
          Jesse Glick made changes -
          Resolution Original: Fixed [ 1 ]
          Status Original: Fixed but Unreleased [ 10203 ] New: Reopened [ 4 ]
          Jesse Glick made changes -
          Resolution New: Not A Defect [ 7 ]
          Status Original: Reopened [ 4 ] New: Resolved [ 5 ]
          fan made changes -
          Attachment New: image-2021-07-30-08-53-43-051.png [ 55308 ]

            Assignee: elhay efrat
            Reporter: Kevin Yu
            Votes: 0
            Watchers: 13