Jenkins / JENKINS-28151

If the node is switched off after execution, some garbage is generated

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: ws-cleanup-plugin
    • None

      Because of the specifics of our project we need to reboot the node just after the Jenkins build completes.

      After updating I observe that some garbage is generated if we enable workspace cleanup,
      e.g. I see many dirs like this:
      /local/home/jenkins/fsroot/workspace/<my project>_ws-cleanup_1430299254267

      I suppose this is connected to the asynchronous deletion: the deletion is not completed before the node reboots.
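The numeric suffix in the path above is the rename the plugin performs before deleting in the background, so leftovers from interrupted deletes can be located by pattern. A minimal sketch for listing them (the workspace root is taken from the example path above; adjust for your node layout):

```shell
# Root of this node's workspaces (assumption based on the path in the
# report above; adjust for your node layout).
WORKSPACE_ROOT=${WORKSPACE_ROOT:-/local/home/jenkins/fsroot/workspace}

# ws-cleanup renames a workspace to <name>_ws-cleanup_<millis> before
# deleting it in the background; any directory still matching that
# pattern is an asynchronous delete that never finished.
find "$WORKSPACE_ROOT" -maxdepth 1 -type d -name '*_ws-cleanup_*' 2>/dev/null || true
```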

          [JENKINS-28151] If the node is switched off after execution, some garbage is generated

          Johnny Willemsen added a comment:

          We see a similar issue on Windows: the directory is renamed because of the asynchronous delete, but we reboot the node in the next Jenkins step, and the asynchronous delete is not finished by then. Also, because of the size of our workspace, the asynchronous delete together with the freshly checked-out tree causes our systems to run out of disk space. It would be helpful for us to have an option to disable the asynchronous delete; we know our build would take longer, but it would save us a ton of disk space.
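Later releases of the ws-cleanup plugin expose exactly such an option: `disableDeferredWipeout` on the `cleanWs` Pipeline step forces a synchronous delete instead of the rename-and-delete-in-background behavior. A minimal sketch of its use in a declarative Pipeline (verify that your plugin version supports the option):

```groovy
post {
    always {
        // Delete the workspace synchronously: no rename to
        // <name>_ws-cleanup_<timestamp> and no background deletion
        // thread, so rebooting the node right after the build cannot
        // leave leftover directories behind.
        cleanWs deleteDirs: true, disableDeferredWipeout: true
    }
}
```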

          Adam Brousseau added a comment (edited):

          We are also seeing this issue on Windows (Cygwin), but we are not rebooting the nodes. Eventually we run out of space as well, once enough of the renamed directories are created. Our only workaround at the moment is either to switch to "rm -rf" or to run another build periodically that removes all the workspaces from the nodes.

          dir("${WORKSPACE}/../") {
              sh "ls | grep -v ${JOB_NAME} | xargs rm -rf"
          }
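A narrower variant of the workaround above removes only directories matching the ws-cleanup rename pattern, so the live workspaces of other jobs on the node are left alone. A sketch, assuming `WORKSPACE` is the usual Jenkins-provided variable (the fallback path here is a hypothetical example):

```shell
# WORKSPACE is normally set by Jenkins; the fallback is only a
# hypothetical example path for illustration.
WORKSPACE=${WORKSPACE:-/local/home/jenkins/fsroot/workspace/myjob}

# Delete only the renamed leftovers of interrupted asynchronous wipes,
# leaving every live workspace untouched.
find "${WORKSPACE}/.." -maxdepth 1 -type d -name '*_ws-cleanup_*' \
    -exec rm -rf {} + 2>/dev/null || true
```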

            Assignee: vjuranek
            Reporter: Roman G (ryg_)
            Votes: 2
            Watchers: 5
