Also, I'd like to recommend an alternative to the "delay and wait before retrying" strategy. While it works most of the time, it's not entirely foolproof: you can only hope that NTFS will release the lock within the timeframe of your delays/retries.
Generally, what serves me best on NTFS systems is NOT to delete large folders immediately, but instead to rename/move them to a different location (where a batch job can delete them later), and possibly to recreate the desired folder afterwards.
I actually do this for my Maven local repository and most of my development checkouts on my development machine. I have a custom alias that moves things to a temp folder instead of deleting them, and a cron job that regularly empties that folder. This way no lingering lock ever blocks the folder you're currently working with.
Jenkins could very well use a similar approach, moving the data to be disposed of into the Windows temp folder, or into a trash folder of its own choosing that an internal task empties regularly.
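To make the idea concrete, here is a minimal sketch of the "dispose = move" step in Java (`TrashMove`, `dispose`, and the trash-folder layout are all hypothetical names, not anything Jenkins actually ships). A rename within the same volume doesn't need to remove any file, so transient locks that break a recursive delete tend not to get in the way:

```java
import java.io.IOException;
import java.nio.file.*;

public class TrashMove {
    // Hypothetical helper: instead of deleting 'dir', move it into a trash
    // folder so a background task can delete it later.
    static Path dispose(Path dir, Path trashRoot) throws IOException {
        Files.createDirectories(trashRoot);
        // Unique target name, so disposing of same-named folders never collides.
        Path target = trashRoot.resolve(dir.getFileName() + "." + System.nanoTime());
        try {
            // On the same volume this is a rename: close to instantaneous.
            return Files.move(dir, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Trash folder on another volume: fall back to a plain move.
            return Files.move(dir, target);
        }
    }

    public static void main(String[] args) throws IOException {
        Path ws = Files.createTempDirectory("workspace");
        Files.writeString(ws.resolve("build.log"), "log data");
        Path trash = Files.createTempDirectory("trash");
        Path moved = dispose(ws, trash);
        // The original path is gone immediately; the content survives in trash.
        System.out.println("disposed=" + (Files.notExists(ws) && Files.exists(moved.resolve("build.log"))));
    }
}
```

The caller perceives the deletion as complete as soon as the move returns, which is the "considerably faster" effect described below.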
This approach has multiple advantages:
- it reliably sidesteps the locking issue,
- no garbage collection is required,
- no artificial delay is required,
- and the "delete" operation is now perceived to be considerably faster (nothing is actually deleted yet, and a move within the same file system is close to instantaneous).
Of course it means that at some point a lengthy, possibly I/O-intensive deletion will run in the background, but depending on the implementation this could be scheduled during periods of inactivity, on a fixed schedule, or only when disk space runs low, etc.
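The background emptying task could look something like this sketch (again, `TrashSweeper` and its layout are assumptions, not existing Jenkins code). Entries that are still locked are simply skipped and retried on the next sweep, so no individual sweep ever has to succeed completely:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TrashSweeper {
    // Delete everything under trashRoot bottom-up (children before parents).
    static void empty(Path trashRoot) throws IOException {
        if (Files.notExists(trashRoot)) return;
        try (var entries = Files.walk(trashRoot)) {
            entries.sorted(Comparator.reverseOrder())       // deepest paths first
                   .filter(p -> !p.equals(trashRoot))       // keep the trash root itself
                   .forEach(p -> {
                       try { Files.delete(p); }
                       catch (IOException stillLocked) { /* retry on next sweep */ }
                   });
        }
    }

    public static void main(String[] args) throws Exception {
        Path trash = Files.createTempDirectory("trash");
        Path junk = Files.createDirectories(trash.resolve("old-workspace/sub"));
        Files.writeString(junk.resolve("data.txt"), "stale");

        // A real daemon would run empty() on a recurring timer (e.g. hourly,
        // or during idle periods); a single run is enough to demonstrate it.
        var sweeper = Executors.newSingleThreadScheduledExecutor();
        sweeper.schedule(() -> { try { empty(trash); } catch (IOException ignored) {} },
                         0, TimeUnit.SECONDS);
        sweeper.shutdown();
        sweeper.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println("swept=" + Files.notExists(trash.resolve("old-workspace")));
    }
}
```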
Just my 2 cents, but considering that it's not atypical for Jenkins to deal with large folders, this seems like a good approach for a number of scenarios (new/clean workspaces, deleting build records, deleting jobs, etc.).