  1. Jenkins
  2. JENKINS-47770

Sometimes the node() step allocates a workspace that is already in use by another executor on the same slave.

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • None

      I am able to run several instances of the same build concurrently on the same slave. Each build uses the node() step to acquire a slave and create a workspace. I have noticed that when I do this, Jenkins often assigns the same workspace directory to multiple node blocks. These node blocks might be executing in parallel as part of the same build (in a "parallel" block), or they might belong to two discrete builds that happen to be running concurrently. Either way, I often see two node blocks executing at the same time on the same slave with the same workspace directory.

      When this happens, unexpected build results occur, or file access errors cause checkout to fail.
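
      A quick way to observe the collision (a diagnostic sketch, not taken from the report) is to run two parallel node blocks against the same label and echo env.WORKSPACE; when the bug strikes, both branches print the same path instead of one of them receiving an @2-suffixed directory:

      // Diagnostic sketch: print the workspace each branch was allocated.
      // Concurrent allocations on one slave should normally differ (e.g.
      // "...\job" vs "...\job@2"); identical output shows the collision.
      parallel(
          A: { node('ls620||ls623||ls638||ls629||ls722') { echo "A: ${env.WORKSPACE}" } },
          B: { node('ls620||ls623||ls638||ls629||ls722') { echo "B: ${env.WORKSPACE}" } }
      )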

      The workaround I am using is to manually assign my workspaces, using the executor number to uniquely identify the workspace:

      ws ("workspace/" + env.EXECUTOR_NUMBER) { ... }

      This ensures that no two executors on the same slave will ever use the same workspace at the same time. Currently, the Jenkins node() step does not provide that guarantee for me.
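
      For context, here is that workaround sketched as a complete node block (a minimal sketch; the label expression is borrowed from the example below, and the body is elided just as in the report):

      // Workaround sketch: key the workspace to the executor slot, so two
      // executors on the same slave can never share a directory while both
      // are busy. EXECUTOR_NUMBER is unique per executor on a given node.
      node('ls620||ls623||ls638||ls629||ls722') {
          ws("workspace/" + env.EXECUTOR_NUMBER) {
              // ... checkout and build steps ...
          }
      }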

      Here is an example of a pipeline script that will cause the problem (each node in the pipeline has 2-4 executors on it). The checkout step uses the Git SCM plugin.

      // doCheckout(), stashSourceFiles(), loadSourceFiles(), and performBuild()
      // are the reporter's own helper functions; their definitions are not shown.
      pipeline {
          agent none
          stages {
              stage('SCM Checkout') {
                  steps {
                      node('ls620||ls623||ls638||ls629||ls722') {
                          script {
                              doCheckout()
                          }
      
                          stashSourceFiles()
                          deleteDir()
                      }
                  }
              }
              stage('Parallel Builds') {
                  steps {
                      parallel(
                          B1: {
                              node('ls620||ls623||ls638||ls629||ls722') {
                                  loadSourceFiles()
                                  performBuild(1)
                                  deleteDir()
                              }
                          },
      
                          B2: {
                              node('ls620||ls623||ls638||ls629||ls722') {
                                  loadSourceFiles()
                                  performBuild(2)
                                  deleteDir()
                              }
                          },
      
                          B3: {
                              node('ls620||ls623||ls638||ls629||ls722') {
                                  loadSourceFiles()
                                  performBuild(3)
                                  deleteDir()
                              }
                          },
      
                          B4: {
                              node('ls722') {
                                  loadSourceFiles()
                                  performBuild(4)
                                  deleteDir()
                              }
                          },
      
                          B5: {
                              node('ls620||ls623||ls638||ls629||ls722') {
                                  loadSourceFiles()
                                  performBuild(5)
                                  deleteDir()
                              }
                          },
      
                          // If any of the parallel branches fails, stop execution.
                          failFast: true
                      )
                  }
              }
          }
      }
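
      Applying the workaround above to this script means wrapping each node body in an executor-scoped ws() block. A sketch for branch B1 follows (the other branches take the same pattern; this sidesteps the collision but does not fix the underlying allocation bug):

      B1: {
          node('ls620||ls623||ls638||ls629||ls722') {
              // ws() keys the directory to the executor slot, so a concurrent
              // branch on the same slave lands in a different directory even
              // when node() hands out the same default workspace.
              ws("workspace/" + env.EXECUTOR_NUMBER) {
                  loadSourceFiles()
                  performBuild(1)
                  deleteDir()
              }
          }
      },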
      

          Environment

          System Properties

          Name  Value
          awt.toolkit sun.awt.windows.WToolkit
          derby.system.home C:\Program Files (x86)\Jenkins
          executable-war C:\Program Files (x86)\Jenkins\jenkins.war
          file.encoding Cp1252
          file.encoding.pkg sun.io
          file.separator \
          hudson.lifecycle hudson.lifecycle.WindowsServiceLifecycle
          java.awt.graphicsenv sun.awt.Win32GraphicsEnvironment
          java.awt.headless true
          java.awt.printerjob sun.awt.windows.WPrinterJob
          java.class.path C:\Program Files (x86)\Jenkins\jenkins.war
          java.class.version 52.0
          java.endorsed.dirs C:\Program Files (x86)\Java\jre1.8.0_121\lib\endorsed
          java.ext.dirs C:\Program Files (x86)\Java\jre1.8.0_121\lib\ext;C:\Windows\Sun\Java\lib\ext
          java.home C:\Program Files (x86)\Java\jre1.8.0_121
          java.io.tmpdir C:\Users\qbuild\AppData\Local\Temp\
          java.library.path C:\Program Files (x86)\Java\jre1.8.0_121\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\cygwin\bin;C:\cygwin\usr\local\bin;C:\unixTools\wbin;C:\Program Files (x86)\MKS\IntegrityClient\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\diab\4.4b\WIN32\bin;C:\cygwin\bin;C:\cygwin\usr\local\bin;C:\cygwin\usr\local\m68\bin;C:\Python31;C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE;C:\Program Files (x86)\OpenSSH\bin;C:\Program Files\Cppcheck;C:\Program Files (x86)\Git\cmd;.
          java.net.preferIPv4Stack true
          java.runtime.name Java(TM) SE Runtime Environment
          java.runtime.version 1.8.0_121-b13
          java.specification.name Java Platform API Specification
          java.specification.vendor Oracle Corporation
          java.specification.version 1.8
          java.vendor Oracle Corporation
          java.vendor.url http://java.oracle.com/
          java.vendor.url.bug http://bugreport.sun.com/bugreport/
          java.version 1.8.0_121
          java.vm.info mixed mode
          java.vm.name Java HotSpot(TM) Client VM
          java.vm.specification.name Java Virtual Machine Specification
          java.vm.specification.vendor Oracle Corporation
          java.vm.specification.version 1.8
          java.vm.vendor Oracle Corporation
          java.vm.version 25.121-b13
          jna.loaded true
          jnidispatch.path C:\Users\qbuild\AppData\Local\Temp\jna--965778275\jna3286797607533656805.dll
          line.separator
          mail.smtp.sendpartial true
          mail.smtps.sendpartial true
          os.arch x86
          os.name Windows 7
          os.version 6.1
          path.separator ;
          sun.arch.data.model 32
          sun.awt.enableExtraMouseButtons true
          sun.boot.class.path C:\Program Files (x86)\Java\jre1.8.0_121\lib\resources.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\rt.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\sunrsasign.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\jsse.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\jce.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\charsets.jar;C:\Program Files (x86)\Java\jre1.8.0_121\lib\jfr.jar;C:\Program Files (x86)\Java\jre1.8.0_121\classes
          sun.boot.library.path C:\Program Files (x86)\Java\jre1.8.0_121\bin
          sun.cpu.endian little
          sun.cpu.isalist pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
          sun.desktop windows
          sun.io.unicode.encoding UnicodeLittle
          sun.java.command C:\Program Files (x86)\Jenkins\jenkins.war --httpPort=8080
          sun.java.launcher SUN_STANDARD
          sun.jnu.encoding Cp1252
          sun.management.compiler HotSpot Client Compiler
          sun.os.patch.level Service Pack 1
          svnkit.http.methods Digest,Basic,NTLM,Negotiate
          svnkit.ssh2.persistent false
          user.country US
          user.dir C:\Program Files (x86)\Jenkins
          user.home C:\Users\qbuild
          user.language en
          user.name qbuild
          user.script
          user.timezone America/New_York
          user.variant
          Environment Variables
          Name  Value
          ALLUSERSPROFILE C:\ProgramData
          APPDATA C:\Users\qbuild\AppData\Roaming
          BASE C:\Program Files (x86)\Jenkins
          CommonProgramFiles C:\Program Files (x86)\Common Files
          CommonProgramFiles(x86) C:\Program Files (x86)\Common Files
          CommonProgramW6432 C:\Program Files\Common Files
          COMPUTERNAME LS737
          ComSpec C:\Windows\system32\cmd.exe
          CYGWIN tty
          FP_NO_HOST_CHECK NO
          HOMEDRIVE C:
          HOMEPATH \Users\qbuild
          JENKINS_HOME C:\Program Files (x86)\Jenkins
          LM_LICENSE_FILE 7789@diab-lic.cb.intra.lutron.com
          LOCALAPPDATA C:\Users\qbuild\AppData\Local
          LOGONSERVER \\LS1166
          MAN_CHM_INDEX C:/Program Files (x86)/MKS/IntegrityClient/etc/siman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/imman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/sdman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/isman.idx
          MAN_TXT_INDEX C:/Program Files (x86)/MKS/IntegrityClient/etc/siman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/imman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/sdman.idx;C:/Program Files (x86)/MKS/IntegrityClient/etc/isman.idx
          NUMBER_OF_PROCESSORS 4
          OS Windows_NT
          Path C:\ProgramData\Oracle\Java\javapath;C:\cygwin\bin;C:\cygwin\usr\local\bin;C:\unixTools\wbin;C:\Program Files (x86)\MKS\IntegrityClient\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\diab\4.4b\WIN32\bin;C:\cygwin\bin;C:\cygwin\usr\local\bin;C:\cygwin\usr\local\m68\bin;C:\Python31;C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE;C:\Program Files (x86)\OpenSSH\bin;C:\Program Files\Cppcheck;C:\Program Files (x86)\Git\cmd
          PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
          PROCESSOR_ARCHITECTURE x86
          PROCESSOR_ARCHITEW6432 AMD64
          PROCESSOR_IDENTIFIER Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
          PROCESSOR_LEVEL 6
          PROCESSOR_REVISION 3f02
          ProgramData C:\ProgramData
          ProgramFiles C:\Program Files (x86)
          ProgramFiles(x86) C:\Program Files (x86)
          ProgramW6432 C:\Program Files
          PSModulePath C:\Windows\system32\WindowsPowerShell\v1.0\Modules\
          PUBLIC C:\Users\Public
          PythonPath C:\Python31\Lib;
          SERVICE_ID jenkins
          SystemDrive C:
          SystemRoot C:\Windows
          TEMP C:\Users\qbuild\AppData\Local\Temp
          TMP C:\Users\qbuild\AppData\Local\Temp
          UATDATA C:\Windows\CCM\UATData\D9F8C395-CAB8-491d-B8AC-179A1FE1BE77
          USERDNSDOMAIN INTRA.LUTRON.COM
          USERDOMAIN INTRA
          USERNAME qbuild
          USERPROFILE C:\Users\qbuild
          windir C:\Windows
          windows_tracing_flags 3
          windows_tracing_logfile C:\BVTBin\Tests\installpackage\csilogfile.log
          WINSW_EXECUTABLE C:\Program Files (x86)\Jenkins\jenkins.exe
          WINSW_SERVICE_ID jenkins
          Plugins
          Name  Version  Enabled
          ace-editor 1.1 true
          active-directory 2.6 true
          analysis-core 1.92 true
          ant 1.7 true
          antisamy-markup-formatter 1.5 true
          apache-httpcomponents-client-4-api 4.5.3-2.0 true
          authentication-tokens 1.3 true
          aws-credentials 1.23 true
          aws-java-sdk 1.11.119 true
          backup 1.6.1 true
          bitbucket 1.1.5 true
          bitbucket-approve 1.0.3 true
          bitbucket-build-status-notifier 1.3.3 true
          bitbucket-oauth 0.5 true
          bitbucket-pullrequest-builder 1.4.26 true
          bouncycastle-api 2.16.2 true
          branch-api 2.0.14 true
          build-environment 1.6 true
          build-history-metrics-plugin 1.2 true
          build-monitor-plugin 1.12+build.201704111018 true
          build-name-setter 1.6.7 true
          build-pipeline-plugin 1.5.7.1 true
          build-with-parameters 1.4 true
          ccm 3.1 true
          clone-workspace-scm 0.6 true
          cloudbees-bitbucket-branch-source 2.2.4 true
          cloudbees-folder 6.2.1 true
          cobertura 1.11 true
          codesonar 2.0.5 (cf87a) true
          conditional-buildstep 1.3.6 true
          config-file-provider 2.16.4 true
          copy-to-slave 1.4.4 true
          copyartifact 1.38.1 true
          covcomplplot 1.1.1 true
          cppcheck 1.21 true
          cpptest 0.14 true
          credentials 2.1.16 true
          credentials-binding 1.13 true
          cvs 2.13 true
          delivery-pipeline-plugin 1.0.6 true
          display-url-api 2.1.0 true
          docker-commons 1.9 true
          docker-workflow 1.13 true
          downstream-buildview 1.9 true
          durable-task 1.15 true
          dynamic-search-view 0.2.2 true
          email-ext 2.60 true
          email-ext-recipients-column 1.0 true
          emailext-template 1.0 true
          envfile 1.2 true
          envinject 2.1.5 true
          envinject-api 1.3 true
          export-params 1.9 true
          external-monitor-job 1.7 true
          file-operations 1.7 true
          files-found-trigger 1.5 true
          filesystem-list-parameter-plugin 0.0.3 true
          fstrigger 0.39 true
          ftppublisher 1.2 true
          ghprb 1.39.0 true
          git 3.6.1 true
          git-changelog 1.52 true
          git-client 2.5.0 true
          git-parameter 0.8.1 true
          git-server 1.7 true
          git-tag-message 1.5 true
          gitbucket 0.8 true
          github 1.28.1 true
          github-api 1.89 true
          github-branch-source 2.2.4 true
          github-oauth 0.27 true
          github-organization-folder 1.6 true
          github-pullrequest 0.1.0-rc26 true
          graphiteIntegrator 1.2 true
          greenballs 1.15 true
          groovy 2.0 true
          groovy-remote 0.2 true
          handlebars 1.1.1 true
          http-post 1.2 true
          http_request 1.8.21 true
          hubot-steps 1.1.0 true
          icon-shim 2.0.3 true
          integrity-plugin 2.1 true
          jackson2-api 2.8.7.0 true
          javadoc 1.4 true
          JDK_Parameter_Plugin 1.0 true
          jenkins-jira-issue-updater 1.18 true
          jira 2.4.2 true
          jira-ext 0.7 true
          jira-steps 1.2.3 true
          jira-trigger 0.5.1 true
          JiraTestResultReporter 2.0.4 true
          job-exporter 0.4 true
          jobcacher 1.0 true
          jobConfigHistory 2.18 true
          jqs-monitoring 1.4 true
          jquery 1.12.4-0 true
          jquery-detached 1.2.1 true
          jsch 0.1.54.1 true
          junit 1.21 true
          kubernetes 1.1 true
          ldap 1.17 true
          ldapemail 0.8 true
          mailer 1.20 true
          mapdb-api 1.0.9.0 true
          matrix-auth 2.1 true
          matrix-project 1.12 true
          maven-plugin 3.0 true
          mercurial 2.2 true
          metrics 3.1.2.10 true
          momentjs 1.1.1 true
          multi-branch-project-plugin 0.7 true
          multiple-scms 0.6 true
          openshift-pipeline 1.0.52 true
          openshift-sync 0.1.32 true
          pam-auth 1.3 true
          parallel-test-executor 1.9 true
          Parameterized-Remote-Trigger 2.2.2 true
          parameterized-trigger 2.35.2 true
          periodicbackup 1.5 true
          pipeline-aggregator-view 1.8 true
          pipeline-build-step 2.5.1 true
          pipeline-github-lib 1.0 true
          pipeline-githubnotify-step 1.0.3 true
          pipeline-graph-analysis 1.5 true
          pipeline-input-step 2.8 true
          pipeline-milestone-step 1.3.1 true
          pipeline-model-api 1.2.2 true
          pipeline-model-declarative-agent 1.1.1 true
          pipeline-model-definition 1.2.2 true
          pipeline-model-extensions 1.2.2 true
          pipeline-multibranch-defaults 1.1 true
          pipeline-rest-api 2.9 true
          pipeline-stage-step 2.2 true
          pipeline-stage-tags-metadata 1.2.2 true
          pipeline-stage-view 2.9 true
          pipeline-utility-steps 1.5.1 true
          plain-credentials 1.4 true
          postbuild-task 1.8 true
          PrioritySorter 3.5.1 true
          python 1.3 true
          release-helper 1.3.2 true
          robot 1.6.4 true
          role-strategy 2.6.1 true
          run-condition 1.0 true
          saferestart 0.3 true
          scm-api 2.2.3 true
          script-security 1.34 true
          show-build-parameters 1.0 true
          sonar 2.6.1 true
          sonargraph-integration 2.1.1 true
          sonargraph-plugin 1.6.4 true
          ssh 2.5 true
          ssh-agent 1.15 true
          ssh-credentials 1.13 true
          ssh-slaves 1.22 true
          stash-pullrequest-builder 1.7.0 true
          stashNotifier 1.12 true
          structs 1.10 true
          subversion 2.9 true
          thinBackup 1.9 true
          throttle-concurrents 2.0.1 true
          timestamper 1.8.8 true
          token-macro 2.3 true
          translation 1.15 true
          variant 1.1 true
          violation-comments-to-stash 1.57 true
          windows-slaves 1.3.1 true
          workflow-aggregator 2.5 true
          workflow-api 2.22 true
          workflow-basic-steps 2.6 true
          workflow-cps 2.41 true
          workflow-cps-global-lib 2.9 true
          workflow-durable-task-step 2.17 true
          workflow-job 2.12.2 true
          workflow-multibranch 2.16 true
          workflow-remote-loader 1.4 true
          workflow-scm-step 2.6 true
          workflow-step-api 2.13 true
          workflow-support 2.16 true
          xunit 1.102 true
          zephyr-for-jira-test-management 1.4 true

          Andrew Bayer added a comment -

          Any chance you've got an example that can reproduce this, even if not every time?


            Assignee: Unassigned
            Reporter: Jack Zylkin (jzylkin)
            Votes: 2
            Watchers: 6