Jenkins / JENKINS-69970

Workspace is wiped on initial clone, even without "Clean before checkout" or "cleanup after checkout"

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Component: git-plugin
    • Labels: None
    • Environment: Jenkins 2.332.2

      I am running my job inside an ECS agent, so there is nothing to clean. In the past, I removed "Clean up after checkout" and "Clean up before checkout," and my job ran inside the ECS agent without issue. Last night my job ran and tried to clean the workspace; nothing about the code changed. Why is it trying to clean when I told it not to? What changed in the background? Can I do a force checkout without a clean?
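      For reference, a minimal sketch (all paths and file names here are illustrative, not from the job above) showing that `git checkout -f <hash>`, the command visible in the log below, forces tracked files back to a revision without touching untracked files; it is the separate workspace cleanup (`git clean -fdx` or the plugin's directory wipe) that removes untracked content:

      ```shell
      # Demo: force checkout discards modifications to tracked files but
      # leaves untracked files in place.
      set -e
      work=$(mktemp -d)
      cd "$work"
      git init -q repo
      cd repo
      git config user.email tester@example.com
      git config user.name tester
      echo v1 > tracked.txt
      git add tracked.txt
      git commit -qm "initial"
      sha=$(git rev-parse HEAD)
      echo v2 > tracked.txt        # local modification to a tracked file
      echo keep > untracked.txt    # untracked file, e.g. build output
      git checkout -qf "$sha"      # force checkout, as the plugin does
      cat tracked.txt              # restored to the committed v1
      test -f untracked.txt && echo "untracked file survives the force checkout"
      ```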

       
      Obtained Jenkinsfiles/Jenkinsfile-DOD-FortifyDocker from 9999b547162b80dd2a3badb67d7cf4fb0229f60f
      [Pipeline] Start of Pipeline
      [Pipeline] node
      Still waiting to schedule task
      'ECSAgent-ecsAgent-xmw0r' is offline
      Running on ECSAgent-ecsAgent-xmw0r in /home/jenkins/workspace/eportBuilder_DOD-Fortify-MB_scqc
      [Pipeline] {
      [Pipeline] stage
      [Pipeline] { (Declarative: Checkout SCM)
      [Pipeline] checkout
      using credential a5b288c3-b475-4957-b8cb-1a761d6fa7a5
      Cloning the remote Git repository
      Cloning with configured refspecs honoured and without tags
      Cloning repository <git repo>
       > /usr/bin/git init /home/jenkins/workspace/eportBuilder_DOD-Fortify-MB_scqc # timeout=10
      Fetching upstream changes from <git repo>
       > /usr/bin/git --version # timeout=10
       > git --version # 'git version 2.30.2'
      using GIT_ASKPASS to set credentials
       > /usr/bin/git fetch --no-tags --force --progress -- <git repo> +refs/heads/*:refs/remotes/origin/* # timeout=10
      Avoid second fetch
      Checking out Revision 9999b547162b80dd2a3badb67d7cf4fb0229f60f (scqc)
       > /usr/bin/git config remote.origin.url <git repo> # timeout=10
       > /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
       > /usr/bin/git config core.sparsecheckout # timeout=10
       > /usr/bin/git checkout -f <hash> # timeout=10
      Commit message: "remove skip"
       > /usr/bin/git rev-list --no-walk <hash> # timeout=10
      Cleaning workspace
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] withEnv
      [Pipeline] {
      [Pipeline] withEnv
      [Pipeline] {
      [Pipeline] stage
      [Pipeline] { (Fortify Scan)
       > /usr/bin/git rev-parse --verify HEAD # timeout=10
      Resetting working tree
       > /usr/bin/git reset --hard # timeout=10
       > /usr/bin/git clean -fdx # timeout=10
      [Pipeline] node
      Running on Jenkins in /home/tomcat/.jenkins/workspace/eportBuilder_DOD-Fortify-MB_scqc
      [Pipeline] {
      [Pipeline] checkout
      using credential a5b288c3-b475-4957-b8cb-1a761d6fa7a5
      Cloning the remote Git repository
      Cloning with configured refspecs honoured and without tags
      Cloning repository ...
      ERROR: Failed to clean the workspace
      jenkins.util.io.CompositeIOException: Unable to delete '/home/tomcat/.jenkins/workspace/eportBuilder_DOD-Fortify-MB_scqc'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 778 additional exceptions)
          at jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:87)
          at hudson.Util.deleteContentsRecursive(Util.java:285)


          Mark Waite added a comment -

          Thanks for reporting the issue, karatetd. I'm afraid this is long-standing behavior of the git plugin that happens the first time it clones into a workspace. I believe the original motivation for emptying the destination directory before the first clone was to prevent "dirty workspaces" from causing the initial clone to fail.

          When the failure is logged, it usually means the directory contained content that the agent process was not permitted to delete. On Unix systems, that might mean a file or directory inside the workspace had permissions that prevented deletion; for example, a non-root agent process cannot delete a file or directory owned by root.

          The failure message only appears on the initial clone of a git repository, not on later updates to the contents of the repository.

          https://github.com/MarkEWaite/jenkins-bugs/blob/JENKINS-22795/Jenkinsfile is a test that validates the bug is still open. I've confirmed that it is. I've been unwilling to change the behavior for fear of breaking the many users who likely depend on it.
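          For pipelines that want checkout without the clean extensions, the configuration can be spelled out explicitly. A minimal sketch (the repository URL, branch, and credentials ID below are placeholders, not values from this issue); note that, per the comment above, the plugin still empties the directory on the very first clone regardless:

          ```groovy
          // Explicit checkout with an empty extensions list: no
          // CleanBeforeCheckout / CleanCheckout, so tracked files are forced
          // to the requested revision and untracked files are left in place
          // on subsequent builds.
          checkout([
              $class: 'GitSCM',
              branches: [[name: 'scqc']],
              userRemoteConfigs: [[url: 'https://example.com/repo.git',
                                   credentialsId: 'my-credentials-id']],
              extensions: []
          ])
          ```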


          Mark Waite added a comment -

          Closed as duplicate of JENKINS-22795


            Assignee: Mark Waite (markewaite)
            Reporter: Tiffany (karatetd)
            Votes: 0
            Watchers: 2