
Asynchronous cleanup not removing renamed workspace directories on slaves

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Component: ws-cleanup-plugin
    • Labels: None
    • Environment: Jenkins 1.579, ws-cleanup 0.24

      After upgrading to ws-cleanup 0.24 in order to get the asynchronous cleanup feature, we noticed the workspaces on our slaves getting renamed to the form ${WORKSPACE}_ws-cleanup_${TIMESTAMP} (i.e., job1 would become job1_ws-cleanup_1411197183394). The expected behavior under ws-cleanup 0.24 is that these directories are temporary, existing only to support asynchronous processing, and would be deleted. However, these directories never get removed from the slave. Over time, the slave hard drives filled up, resulting in build failures.
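      For illustration, a minimal sketch of a manual workaround, assuming the leftover directories follow the <job>_ws-cleanup_<timestamp> naming shown above. This is not the plugin's own cleanup code; the class name and the workspace root path are hypothetical examples and would need to be adapted to the slave in question.

      import java.io.IOException;
      import java.nio.file.*;
      import java.util.Comparator;
      import java.util.stream.Stream;

      // Workaround sketch: remove leftover "<job>_ws-cleanup_<timestamp>" directories on a slave.
      public class LeftoverWorkspaceCleanup {
          public static void main(String[] args) throws IOException {
              // Hypothetical slave workspace root; adjust to the real location on the slave.
              Path workspaceRoot = Paths.get("C:\\jenkins\\workspace");

              try (Stream<Path> dirs = Files.list(workspaceRoot)) {
                  dirs.filter(Files::isDirectory)
                      // Match the rename pattern described above, e.g. job1_ws-cleanup_1411197183394.
                      .filter(dir -> dir.getFileName().toString().matches(".+_ws-cleanup_\\d+"))
                      .forEach(LeftoverWorkspaceCleanup::deleteRecursively);
              }
          }

          // Delete a directory tree bottom-up: files first, then their parent directories.
          private static void deleteRecursively(Path dir) {
              try (Stream<Path> tree = Files.walk(dir)) {
                  tree.sorted(Comparator.reverseOrder())
                      .forEach(p -> {
                          try {
                              Files.delete(p);
                          } catch (IOException e) {
                              System.err.println("Could not delete " + p + ": " + e.getMessage());
                          }
                      });
              } catch (IOException e) {
                  System.err.println("Could not walk " + dir + ": " + e.getMessage());
              }
          }
      }

      Running something along these lines periodically on the affected slaves would reclaim the disk space until the underlying bug is fixed.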

          [JENKINS-24824] Asynchronous cleanup not removing renamed workspace directories on slaves

          Tom Moore created issue -

          vjuranek added a comment -

          Hi, any idea how to reproduce it? Everything works fine on my machine. Do you use any "Advanced" options? If so, which ones? What is the OS of the slave where the workspace is not deleted?
          Thanks

          Tom Moore added a comment -

          The slaves are running on a VM that is running Windows 7 Enterprise, Service Pack 1. The slaves start their Jenkins connection via the slave command line. Running as a Windows service is not an option, as the build process must be subordinate to an active login session due to compiler restrictions. No advanced options were set.

          Katrine Skovbo added a comment -

          We are experiencing the same problem. In our case the renamed workspaces are not removed when we use Multiple SCMs to check out more than one repository from Git. The unremoved repositories are placed in a sub-directory.

          vjuranek added a comment -

          Unfortunately I haven't been able to reproduce it yet. Any reproducer would definitely help. I will try with MultipleSCM and several Git repos.

          Marton Sebok added a comment -

          Hi, I have the same issue. I think one key point is that my slave is configured to go offline after being idle for a while, and I want to delete quite a lot of files from a complex directory tree. I can see that some folders have been deleted from the workspace, but the heavier ones are left there.

          Tom Moore added a comment -

          Perhaps workspace size is a factor? The workspaces we saw this on come in at just over 6 GB (which is why we wanted to be able to use this feature in the first place).

          vjuranek added a comment -

          Sorry, I cannot reproduce it even with the MultipleSCM plugin (Jenkins 1.574, MultipleSCM 0.3, Git 2.2.7, ws-cleanup 0.24).
          @Marton Sebok: I haven't tried it yet, but this seems to be a valid concern. I will investigate it further. Thanks for the pointer!

          vjuranek added a comment -

          @Tom Moore: do you also take slaves offline, as pointed out by Marton Sebok? In that case it would make sense to me, but otherwise the size of the workspace shouldn't matter, IMHO. Do you see any errors in the logs?

          Tom Moore added a comment -

          No, we don't take the slaves offline. We didn't notice any errors in the logs; the first indication we had that this was a problem was when we ran out of disk space.

            Assignee: Oliver Gondža
            Reporter: Tom Moore
            Votes: 18
            Watchers: 40

              Created:
              Updated:
              Resolved: