
gui option for "hudson.model.WorkspaceCleanupThread.disabled"

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Major
    • Component: core
    • Labels: None

      Using a shared repository for several builds leads to corrupted builds and a broken repository, because the cleanup job deletes the workspace while another job is building.

      "hudson.model.WorkspaceCleanupThread.disabled" disables the cleaning but must be set with every start of jenkins. A gui option should be added and the option should be configurable in the config file.

          [JENKINS-9436] gui option for "hudson.model.WorkspaceCleanupThread.disabled"

          Andrew Wood added a comment -

          I've seen this same behaviour: old workspaces disappear without warning. It would be much more convenient to have a simple toggle checkbox in the GUI configuration; re-applying the property upon each restart is a bit error-prone.


          James Nord added a comment -

          There is no problem re-applying a config at every start of Jenkins, unless you are starting Jenkins manually (in which case you shouldn't need to do that).

          You can set it in the init script or sysconfig/jenkins on Unix-like machines, and in jenkins.xml on Windows:

          Unix:
          JAVA_ARGS="-Dhudson.model.WorkspaceCleanupThread.disabled=true ...some_other_args"

          Windows:
          <arguments>-Xrs -Xmx256m -Dhudson.model.WorkspaceCleanupThread.disabled=true
          -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
          -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>

          Given there is a simple way to set this (and it is not something that should be set in normal use cases), I would recommend closing this issue.

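          As an aside, a minimal sketch of how a startup-only kill switch like this is typically read, assuming the property is checked once via Boolean.getBoolean(); the class and field names below are illustrative and are not claimed to match Jenkins core:

          // Illustrative sketch only; not the actual Jenkins core code.
          public class CleanupKillSwitchSketch {
              // -Dhudson.model.WorkspaceCleanupThread.disabled=true flips this to true.
              private static final boolean DISABLED =
                      Boolean.getBoolean("hudson.model.WorkspaceCleanupThread.disabled");

              public static void main(String[] args) {
                  if (DISABLED) {
                      System.out.println("Workspace cleanup disabled; skipping run.");
                      return;
                  }
                  System.out.println("Workspace cleanup would run here.");
              }
          }

          Because Boolean.getBoolean() reads a JVM system property, the flag has to arrive as a -D argument at JVM startup, which is why it belongs in the init script, sysconfig/jenkins, or jenkins.xml rather than being toggled at runtime.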

          Jesse Glick added a comment -

          Agreed, we do not generally provide UI options for every system property.

          Moving the feature to a separate plugin would be fine.

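          A sketch of how a separate plugin could persist such a toggle, assuming a jenkins.model.GlobalConfiguration extension; the class and field names are hypothetical, a config.jelly form would still be needed, and the cleanup logic itself would still have to consult the setting:

          import hudson.Extension;
          import jenkins.model.GlobalConfiguration;
          import org.kohsuke.stapler.DataBoundSetter;

          // Hypothetical global configuration entry for a workspace-cleanup toggle.
          @Extension
          public class WorkspaceCleanupToggle extends GlobalConfiguration {

              private boolean cleanupDisabled;

              public WorkspaceCleanupToggle() {
                  load(); // restore the persisted value on startup
              }

              public boolean isCleanupDisabled() {
                  return cleanupDisabled;
              }

              @DataBoundSetter
              public void setCleanupDisabled(boolean cleanupDisabled) {
                  this.cleanupDisabled = cleanupDisabled;
                  save(); // persist immediately so the value survives restarts
              }
          }

          This covers only the configuration-persistence half of such a plugin; the cleanup behaviour itself would still need to read the flag.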

          Jesse Glick added a comment -

          Noted https://trello.com/c/wTenPUgi/21-workspacecleanupthread for future reference.

          Regarding the original complaint:

          "cleanup job is deleting the workspace while another job is building"

          This should not be the case, since the cleanup thread specifically checks whether the workspace being considered is on the same node as was last used to build this project. There may of course be bugs in this logic that result in accidental cleaning under some edge conditions, but if so, that should be filed with any information needed to reproduce it.


          Andrew Barber added a comment - edited

          This is one system property that needs a UI option. Deleting workspaces is pretty serious and warrants live control over the behavior.

          I think there is still a bug in the cleanup code. We've been battling with disappearing workspaces and I could not figure out what was happening until I stumbled upon this thread. In our case, I moved a job from one slave to a different slave, but the cleanup code seemed to think it was OK to delete based on the old slave. The message in the log was:
          Deleting <dir> on <old slave name>

          We haven't been using that old slave for this job for at least a few weeks. To make matters worse, it deleted the workspace WHILE the job was running on the new slave.
          This appears to be the trouble code:

          for (Node node : nodes) {
              FilePath ws = node.getWorkspaceFor(item);
              if (ws == null) {
                  continue; // offline, fine
              }
              boolean check;
              try {
                  check = shouldBeDeleted(item, ws, node);
              } catch (IOException x) {

          The first node it comes across for which shouldBeDeleted returns true causes the workspace to be deleted, even if another node (later in the list) is the last builder of that job (meaning the job is still active). The check that is supposed to catch this is in shouldBeDeleted():

          if (lb != null && lb.equals(n)) {
              // this is the active workspace. keep it.
              LOGGER.log(Level.FINE, "Directory {0} is the last workspace for {1}",
                      new Object[] {dir, p});
              return false;
          }

          But since the for loop code takes action before checking all nodes, this check can be pointless.
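          A minimal sketch of the reordering described above, shown as a fragment in the style of the excerpt and using the same names (item, nodes, shouldBeDeleted); getLastBuiltOn(item) is a hypothetical helper standing in for however the last-built node is actually resolved:

          // Hypothetical sketch only: resolve the node that last built the project
          // before the loop, so no per-node decision can delete the active workspace.
          Node lastBuiltOn = getLastBuiltOn(item); // hypothetical helper
          for (Node node : nodes) {
              FilePath ws = node.getWorkspaceFor(item);
              if (ws == null) {
                  continue; // offline, fine
              }
              if (node.equals(lastBuiltOn)) {
                  continue; // never delete the workspace of the last builder
              }
              boolean check;
              try {
                  check = shouldBeDeleted(item, ws, node);
              } catch (IOException x) {
                  continue; // on error, skip rather than risk a wrong deletion
              }
              if (check) {
                  try {
                      ws.deleteRecursive(); // FilePath API; the real cleanup path may differ
                  } catch (IOException | InterruptedException x) {
                      // log and move on
                  }
              }
          }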


            Assignee: Unassigned
            Reporter: Marcel Pater (mpater)
            Votes: 3
            Watchers: 4
