This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.
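For context, the reason a restart is currently required is presumably the usual Jenkins pattern of reading a <fully-qualified class name>.disabled system property once when the class is initialised. The snippet below is only an illustrative sketch of that pattern (the class and field names are made up; check the real WorkspaceCleanupThread source for the exact property name in your version), but it shows why a value read at class load cannot be toggled live and why a UI-backed setting would behave differently.

public class WorkspaceCleanupThreadSketch {
    // Read a single time when the class is loaded, so changing the property on a
    // running instance has no effect; only a persisted, UI-backed setting would
    // give live control over the sweep.
    private static final boolean DISABLED =
            Boolean.getBoolean(WorkspaceCleanupThreadSketch.class.getName() + ".disabled");

    public void execute() {
        if (DISABLED) {
            return; // skip the cleanup sweep entirely
        }
        // ... walk jobs and nodes, deleting stale workspaces ...
    }
}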
I think there is still a bug in the cleanup code. We've been battling with disappearing workspaces, and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code still decided the workspace could be deleted based on the old slave. The message in the log was
Deleting <dir> on <old slave name>
We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave.
This appears to be the troublesome code:
for (Node node : nodes) {
    FilePath ws = node.getWorkspaceFor(item);
    if (ws == null) {
        continue; // offline, fine
    }
    boolean check;
    try {
        check = shouldBeDeleted(item, ws, node);
    } catch (IOException x) {
The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node later in the list is the last builder of that job (meaning the job is still active). That case is supposed to be caught by this check in shouldBeDeleted():
if (lb != null && lb.equals(n)) {
    // this is the active workspace. keep it.
    LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}", new Object[] {dir, p});
    return false;
}
But since the for loop acts on the first match instead of checking all nodes first, this check can be rendered pointless.
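A sketch of the restructuring I have in mind (an illustration, not an actual patch): do a first pass that evaluates every node, and only then delete, explicitly excluding whichever node ran the last build. The Jenkins types and calls used here (Node, FilePath, TopLevelItem, AbstractProject, getWorkspaceFor, getLastBuiltOn, deleteRecursive) are the real core API; the class name, the sweep method, and the shouldBeDeleted stub are placeholders for the existing cleanup-thread code quoted above.

import hudson.FilePath;
import hudson.model.AbstractProject;
import hudson.model.Node;
import hudson.model.TopLevelItem;

import java.io.IOException;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

class TwoPassWorkspaceSweep {

    void sweep(TopLevelItem item, Collection<Node> nodes) throws InterruptedException {
        // Pass 1: evaluate every node before taking any destructive action.
        Map<Node, FilePath> candidates = new LinkedHashMap<>();
        for (Node node : nodes) {
            FilePath ws = node.getWorkspaceFor(item);
            if (ws == null) {
                continue; // offline, fine
            }
            boolean check;
            try {
                check = shouldBeDeleted(item, ws, node);
            } catch (IOException x) {
                continue; // could not decide safely; leave this workspace alone
            }
            if (check) {
                candidates.put(node, ws);
            }
        }

        // Pass 2: whatever order the nodes came in, never delete the workspace
        // on the node that ran the last build, then act on the rest.
        if (item instanceof AbstractProject) {
            Node lastBuiltOn = ((AbstractProject<?, ?>) item).getLastBuiltOn();
            if (lastBuiltOn != null) {
                candidates.remove(lastBuiltOn);
            }
        }
        for (Map.Entry<Node, FilePath> entry : candidates.entrySet()) {
            try {
                entry.getValue().deleteRecursive();
            } catch (IOException x) {
                // log and continue; one failed delete should not abort the sweep
            }
        }
    }

    // Stand-in for the existing heuristic quoted above.
    private boolean shouldBeDeleted(TopLevelItem item, FilePath dir, Node n) throws IOException {
        return false;
    }
}

That way the order of the node list no longer matters, and the active workspace cannot be removed because of a verdict reached on an earlier, stale node.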
I've seen this same behaviour: old workspaces disappear without warning. It would be much more convenient to have a simple toggle checkbox in the GUI configuration; re-applying the property upon each restart is a bit error-prone.