Jenkins / JENKINS-3580

Workspace deleted when subversion checkout happens

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Component: subversion-plugin
    • Platform: All, OS: All

      When a Subversion checkout is done, the entire workspace is deleted, not just
      the previous checkout folder. There may be other files held in the workspace
      outside the checkout folder, especially when the same workspace is shared by
      multiple jobs. The code should be changed so that only the previous checkout
      folder, if there is one, is deleted before a checkout is done.
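
      To make the request concrete, here is a minimal sketch of the proposed
      behavior (hypothetical Java class and method names, not the actual
      SubversionSCM code):

          import java.io.File;

          public class CheckoutCleaner {
              // Before a fresh checkout, delete only the previous checkout
              // folder inside the workspace and leave everything else alone.
              public static void prepareCheckout(File workspace, String localModuleDir) {
                  File previousCheckout = new File(workspace, localModuleDir);
                  if (previousCheckout.exists()) {
                      deleteRecursive(previousCheckout); // scoped to the module dir
                  }
                  // Other files in the workspace (shared libraries, other jobs'
                  // checkouts) are deliberately left untouched.
              }

              private static void deleteRecursive(File f) {
                  File[] children = f.listFiles();
                  if (children != null) {
                      for (File c : children) {
                          deleteRecursive(c);
                      }
                  }
                  f.delete();
              }
          }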

          [JENKINS-3580] Workspace deleted when subversion checkout happens

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/hudson/main/core/src/main/java/hudson/scm/SubversionSCM.java
          http://fisheye4.cenqua.com/changelog/hudson/?cs=17550
          Log:
          JENKINS-3580: [FIXED JENKINS-3580]
          When a checkout occurs, only the existing checkout location(s) is/are deleted, not the entire workspace.

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/www/changelog.html
          http://fisheye4.cenqua.com/changelog/hudson/?cs=17552
          Log:
          JENKINS-3580:
          Added changelog message

          Kohsuke Kawaguchi added a comment -

          Within several days of a release, we have two people independently noticing this
          behavior change.

          See http://www.nabble.com/Subversion-workspace-deletion-in-1.302-td23413270.html
          and http://www.nabble.com/Issue-3580---regression-td23402321.html

          So I think we need to revert this change. kaxelson, what you can do is to write
          a plugin that subclasses SubversionSCM, and changes its behavior in a way you
          want. Your plugin can also remove SubversionSCM descriptor from the list, so
          that your Hudson has only one "Subversion" implementation.

          Would that do?
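
          To illustrate the pattern Kohsuke describes, a stand-in sketch (these are
          invented classes, not the real Hudson API): subclass the stock SCM to
          change its cleanup behavior, then swap the stock descriptor for your own
          so only one "Subversion" entry remains in the list.

              import java.util.ArrayList;
              import java.util.List;

              // Invented stand-in for an SCM descriptor.
              class ScmDescriptor {
                  final String displayName;
                  ScmDescriptor(String displayName) { this.displayName = displayName; }
              }

              // Invented stand-in for the descriptor list Kohsuke mentions.
              class ScmRegistry {
                  static final List<ScmDescriptor> DESCRIPTORS = new ArrayList<ScmDescriptor>();

                  // A plugin removes the stock "Subversion" descriptor and registers
                  // its own subclassed variant, so users see exactly one implementation.
                  static void replaceDescriptor(ScmDescriptor stock, ScmDescriptor custom) {
                      DESCRIPTORS.remove(stock);
                      DESCRIPTORS.add(custom);
                  }
              }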

          mdonohue added a comment -
              • Issue 3634 has been marked as a duplicate of this issue.

          kev009 added a comment -

          add cc

          kaxelson added a comment -

          My apologies to those whose builds were broken by this fix.

          We can certainly revert this change if necessary, but I would argue that the
          current behavior is more correct. There is a difference between the workspace
          and the checkout location. This may go unnoticed by those who have one job per
          workspace, but for those with multiple jobs sharing a single workspace, it
          becomes immediately evident. The SCM module is overstepping its bounds by
          killing the entire workspace. As I see it, the SCM module's scope should be
          limited to the checkout locations it controls.

          I'd suggest that a better solution would be to make the same change (keep
          workspace, wipeout checkout location only) in all SCM modules/plugins and create
          a build wrapper that wipes out the workspace only if that behaviour is
          explicitly requested.
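
          A rough sketch of such a build wrapper, against the Hudson BuildWrapper
          extension point of that era (descriptor and UI plumbing omitted; exactly
          where setUp runs relative to checkout varies by version, so treat this as
          illustrative only):

              import hudson.Launcher;
              import hudson.model.AbstractBuild;
              import hudson.model.BuildListener;
              import hudson.tasks.BuildWrapper;

              import java.io.IOException;

              public class WipeWorkspaceWrapper extends BuildWrapper {
                  @Override
                  public Environment setUp(AbstractBuild build, Launcher launcher,
                                           BuildListener listener)
                          throws IOException, InterruptedException {
                      // Runs only for jobs that explicitly add this wrapper, making
                      // the wipe-the-workspace behaviour opt-in rather than SCM-imposed.
                      listener.getLogger().println("Wiping workspace before build");
                      build.getWorkspace().deleteContents();
                      return new Environment() { }; // no environment changes needed
                  }
              }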

          kev009 added a comment -

          The problem is, your "fix" DOES overstep and delete the ENTIRE workspace
          directory, where previously hudson did not. I have 5 projects defined, and some
          of them have dependencies on shared libraries and such built in the other
          workspace directories.

          Example:
          /home/hudson/workspace/project1 - gets built and has shared libs
          /home/hudson/workspace/project2 - start build. Hudson 1.302 DELETES project1's
          workspace

          It seems like your intent is to leave the workspace intact, and that project2
          should only ever touch project2's directory from the SCM module, but that is
          not what is happening.

          Your checkin is doing the exact opposite of what it is supposed to and doing the
          equivalent of `rm -Rf /home/hudson/workspace/*`.

          kaxelson added a comment -

          The behavior described by kev009 is not consistent with the issue reported here:
          http://www.nabble.com/Subversion-workspace-deletion-in-1.302-td23413270.html

          What kev009 has described is the way hudson behaved prior to my change. The
          whole point of the change was to stop the subversion scm module from deleting
          the entire workspace and have it only delete what it had previously checked out.

          kev009, please clarify

          kev009 added a comment -

          I'm not sure what needs clarification.

          Hudson 1.302 DELETES /home/hudson/workspace/*. The SCM module should NEVER
          descend past /home/hudson/workspace/<project name>/ for any reason.

          kaxelson added a comment -

          The only way I can replicate the behavior described by kev009 is to specify my
          local module directory for checkout as "..". Is this what you're doing, kev009?
          If you want to avoid multiple checkouts of the same code, this can be achieved
          by having your projects specify the same shared workspace and having your local
          module directories be subdirectories of this shared workspace.

          The fix for this case causes the local module directory to be deleted (the same
          directory into which the code was previously checked out) rather than the workspace.

          While hudson will still work if you specify ".." as the local module directory,
          it does cause an error message to be displayed on the configuration page. Not
          only my code, but the previous code as well, assumes that the local module
          directory is a descendant directory of the workspace.
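
          For illustration, a hypothetical layout of what kaxelson describes: two
          jobs pointing at the same custom workspace, each with its own local module
          directory, so a checkout only ever wipes the job's own folder:

              /home/hudson/shared-workspace/   <- custom workspace for both jobs
                  projectA/                    <- job A's local module directory
                  projectB/                    <- job B's local module directory
                  shared-libs/                 <- shared files, untouched by checkouts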

          kev009 added a comment -

          I am not. The projects check out to /home/hudson/<project name>/<working copies>/.

          The only thing that stands out in my setup is that /home is an NFS share, and
          two remote executors do the work on the projects in that shared /home/hudson
          directory. The projects are not shared by executors. For instance, there is a
          project32 and a project64, and each is tied to a single node.

          mdonohue added a comment -

          I can see how this feature would be useful for a shared workspace, but Hudson
          doesn't have first class support for shared workspaces - see issue 682. Also,
          the job distribution mechanism in Hudson assumes each job owns a unique
          workspace. You can hack in a shared workspace by manually specifying a
          directory for each of the jobs that need to share, but you also need to make
          sure those jobs are always assigned to the same node. This is a pretty brittle
          setup, which issue 682 is supposed to solve more generally. This makes me think
          features designed for shared workspaces are premature to be part of core Hudson.

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/hudson/main/core/src/main/java/hudson/scm/SubversionSCM.java
          http://fisheye4.cenqua.com/changelog/hudson/?cs=17876
          Log:
          JENKINS-3580:
          Rolled back changes
          merge -r17550:17549

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/www/changelog.html
          http://fisheye4.cenqua.com/changelog/hudson/?cs=17878
          Log:
          JENKINS-3580:
          Added changelog message for rollback

          kev009 added a comment -

          1.304 is still deleting unrelated projects in the workspace while 1.300 worked
          fine, though it seemed to take longer.

          Perhaps some other commit is causing this?

          The builds were 12 days old from when I had to rebuild after 1.302 nuked
          workspace. They disappeared several hours after upgrading to 1.304 from 1.300.

          kaxelson added a comment -

          I, too, am having issues with workspaces mysteriously disappearing, even after
          reverting my changes. This must be related to another change.

          Kohsuke Kawaguchi added a comment -

          The workspace deletion is addressed in issue #3653. Targeted for 1.305.

          skaze added a comment -

          I just upgraded from 828 to 304 and now my workspaces are also deleting themselves.

          I presume it is to do with
          https://hudson.dev.java.net/issues/show_bug.cgi?id=3580 or
          https://hudson.dev.java.net/issues/show_bug.cgi?id=3653 but the fault is so
          severe that I may have to roll back.

          My projects are using scm polling, SVN update and project security. I don't want
          my 400M workspace of libraries deleted each time it checks for updates.

          When is the fix for this planned to be released?

          skaze added a comment -

          I have upgraded to 306 and just had exactly the same problem again: a workspace
          has been deleted with all 500Mb of its contents. Unfortunately this workspace is
          used by other projects, so all the other projects break too. I am now having
          to move this shared workspace job 'outside' of hudson so at least the other jobs
          don't break. However they are still deleting themselves and then performing long
          svn checkouts. This bug is having a big detrimental effect on development.

          Have attached sys info.

          It may be worth noting that I have had some other strange experiences since 306.

          Occasionally when saving the system config Hudson has displayed a screen saying
          not connected or some such. On another occasion Hudson decided to tell me that I
          had two instances sharing the same Hudson home directory (which was incorrect; I
          had one tomcat instance running, checked with ps, and both 'hudsons' had the same
          number before the @ symbol, 'XXX'@hostname).

          I have dropped to only having one executor as I thought it might be related to
          locks and latches, but as far as I can tell the whole thing is just very
          unstable. What is the recommended way of getting back to something functional?
          Also, frequent releases are great for a project, but I think more testing is in
          order.

          skaze added a comment -

          Created an attachment (id=706)
          System information.

          mdonohue added a comment -

          > which was incorrect, I had one tomcat instance running

          Tomcat is designed to deploy multiple webapps, and it's very likely you have
          Hudson deployed multiple times on your tomcat instance. Multiple Hudson
          instances would cause all the weird behavior you are seeing - workspace
          deletion, weird errors on committing configurations.

          I'm returning this issue to FIXED, and I suggest you look to the users mailing
          list to diagnose your multiple-instances problem.

          kev009 added a comment -

          1.305 appears to be working again... no deletions so far.

          tjuerge added a comment -

          As stated in [1] I'm not convinced that the current (non-optional) behaviour of
          deleting EVERYTHING (including a private Maven repository) within a job's
          workspace is the "right" approach.

          So kaxelson's fix (deleting only the subversion modules within the workspace)
          introduced in cs #17550 [2] feels more "appropriate" (at least for our private
          local Maven repository issue) than the current behaviour.

          Btw. a log statement like "Deleting entire workspace..." before calling
          "Util.deleteContentsRecursive(ws);" (in comparison to "Deleting workspace
          location {0}..." before calling "Util.deleteContentsRecursive(local);") would
          be very helpful.

          [1]
          http://www.nabble.com/Subversion-workspace-deletion-(disabled-%22Use-update%22-option)-deletes-private-Maven-repository-as-well-td23762749.html
          [2] CS#17550 - When a checkout occurs, only the existing checkout location(s)
          is/are deleted, not the entire workspace.
          http://fisheye4.atlassian.com/changelog/hudson/?cs=17550
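
          A sketch of the suggested log lines (Util.deleteContentsRecursive and the
          ws/local variables come from the thread; the surrounding method is
          hypothetical):

              import hudson.Util;
              import hudson.model.TaskListener;

              import java.io.File;
              import java.io.IOException;

              class CheckoutLogging {
                  // ws = workspace root, local = the module's checkout directory.
                  static void wipe(File ws, File local, boolean wholeWorkspace,
                                   TaskListener listener) throws IOException {
                      if (wholeWorkspace) {
                          listener.getLogger().println("Deleting entire workspace " + ws);
                          Util.deleteContentsRecursive(ws);
                      } else {
                          listener.getLogger().println("Deleting workspace location " + local);
                          Util.deleteContentsRecursive(local);
                      }
                  }
              }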

          salimfadhley added a comment -

          Vote to keep this issue open:

          I have a use-case where I NEED the workspace to be clean before every job. I'm
          actually testing an installer program, so I really need to see what gets left
          behind after the installer does its thing. Since I do not use Maven or Ant I
          have no clean method to tidy up for me.

          Can I suggest that everybody would be happy if the checkbox were replaced by a
          three-way option:

          • Use Update
          • Delete & Checkout only the SVN repositories
          • Delete the entire Workspace and Checkout SVN again.

          kaxelson added a comment -

          I think it makes sense to have an option to delete the workspace, but I don't
          think this should be part of the scm configuration. It is really a separate thing.

          I would suggest that the change I originally applied for this case be reapplied
          along with corresponding changes for all other scm modules and that a new option
          to use a clean workspace for every build be added at the job level.

          As a refresher, my original change made it so that only the previous checkout
          location (and not the entire workspace) was deleted when a checkout was required.

          tjuerge added a comment -

          I second kaxelson's suggestion to reapply his change [1] along with corresponding
          changes for all other scm modules, and to provide a new job-level option to use
          a clean workspace for every build.

          Wiping out the entire workspace (including a private Maven repository) shouldn't
          be at an SCM's mercy. Instead this should be a concern of the corresponding job.

          Keeping a private Maven repository from being deleted [2] by storing it outside
          the job's workspace (in the workspace's parent folder) will only work on the
          master, where the workspace is a sub-folder named "workspace" within every job
          folder. On slaves there is no such sub-folder within the job folder; there, all
          job folders are placed in a folder named "workspace".

          On the other hand, if someone wants a private Maven repository to be wiped out
          before a job is run, then we would need an additional configuration option as
          well.

          [1] http://fisheye4.cenqua.com/changelog/hudson/?cs=17550
          [2]
          http://www.nabble.com/Subversion-workspace-deletion-(disabled-%22Use-update%22-option)-deletes-private-Maven-repository-as-well-td23762749.html

          zixenator added a comment -

          I am also finding it disconcerting that Hudson is deleting my entire
          workspace. We are using a custom workspace containing shared modules and
          several projects. One project builds the shared components, and the others
          expect to find them in the custom workspace. Of course, when these other
          projects run, Hudson deletes the entire workspace before doing the check-out.
          This deletes all the files built in the shared project that ran earlier, which
          causes the build to fail.

          To make sure I wasn't missing something, I did this simple test:

          1. Create a project named A. The only thing it does is check out a small
          folder from an SVN repository to folder A in a custom workspace (c:\workspace).
          2. Create a duplicate project B that does the same thing, but checks it out to
          folder B in the shared workspace (c:\workspace).
          3. Run project A. It says c:\workspace\A does not exist and checks out the
          files to folder A. Fine.
          4. Run project B. It says c:\workspace\B does not exist, then proceeds to
          delete the entire workspace (including folder A). It then checks out the files
          to folder B.
          5. Run project A again. It says c:\workspace\A does not exist and once again
          deletes the entire workspace (including folder B).
          6. And so on. Each project deletes the other project's files and nothing ever
          lasts long enough to actually accomplish anything.

          As a user, I would really like to see an option in the project config for
          automatically deleting the entire workspace before checkout. Of course, for me
          I would leave this hypothetical option turned OFF.

          zixenator added a comment -

          BTW - the previous experience was with build 1.310

          kaxelson added a comment -

          Once 3966 goes live, I'll reapply this fix

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/hudson/plugins/subversion/src/main/java/hudson/scm/SubversionSCM.java
          http://fisheye4.cenqua.com/changelog/hudson/?cs=19375
          Log:
          [FIXED JENKINS-3580] fixed workspace deletion issue on subversion checkout

          SCM/JIRA link daemon added a comment -

          Code changed in hudson
          User: kaxelson
          Path:
          trunk/www/changelog.html
          http://fisheye4.cenqua.com/changelog/hudson/?cs=19376
          Log:
          JENKINS-3580: adding changelog message

          mdonohue added a comment -

          This changes the behavior of existing Hudson installations - when a user
          upgrades, their workspace will no longer be cleaned if they have the fresh
          checkout option selected for SVN. The functionality associated with the UI
          feature has already flipped and flopped once. Now it's going to flip again.
          This is not a pleasant user experience.

          3966 is nice, but doing all this change at once is surprising for users - I
          think the dev list should be notified of the plan here.

          Kohsuke Kawaguchi added a comment -

          Rolled back from 1.315 release in rev.19544. See
          http://www.nabble.com/attention-all-subversion-users-td24335693.html for the
          discussion.

          zixenator added a comment -

          I was burned by this again today (using Hudson 1.340). I had to add a new job to Hudson to build a C++ app in the same workspace as 8 other applications (they share a lot of subsystems). When Hudson ran the job, it tried to check out the source from the SVN repository. Here is what it wrote in the log:

          Started by upstream project "Subsystems CI" build number 584
          Checking out a fresh workspace because C:\Hudson\GS CI\Grapher\trunk doesn't exist
          

          It is true that the folder didn't exist - I just added the project and expected Hudson to check it out to the workspace. What I didn't expect was for Hudson to delete the ENTIRE workspace because of this. That's right - it deletes all files for all projects in the shared workspace. This is truly unexpected and unfriendly. It takes hours to check out all that source from the repository. Then I need to manually check out the source for the new folder outside of Hudson. Then Hudson will work. <sigh>

          All in all, Hudson is a great product and the Hudson developers have my sincere thanks. But this one issue has caused me many hours of grief. Please reconsider re-implementing kaxelson's changes. He was on the right track with this - at least for our use case. And if this causes problems for other users, perhaps you can add a configuration option:

          [X] Never delete shared workspace

          Rosen Diankov added a comment -

          I second zixenator: when the workspace gets deleted like this, it makes it very difficult to do advanced configurations. I had my checkout option as "svn update as much as possible", but it still deleted the entire workspace when it couldn't find the initial checkout. I never even asked for a fresh checkout...

          A "Never delete entire workspace" check box would be great to have. There's also another option:

          Currently there are 4 different check-out strategies:

          • svn update as much as possible
          • emulate clean checkout
          • check out fresh copy
          • svn update + svn revert

          Maybe you guys could add a

          • svn update as much as possible, don't delete workspace
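
          Written out as a simple enum for clarity (the real plugin models these
          strategies as WorkspaceUpdater subclasses rather than an enum; the fifth
          constant is the proposal above):

              enum SvnCheckoutStrategy {
                  UPDATE_AS_MUCH_AS_POSSIBLE,     // svn update as much as possible
                  EMULATE_CLEAN_CHECKOUT,         // emulate clean checkout
                  FRESH_CHECKOUT,                 // check out a fresh copy
                  UPDATE_WITH_REVERT,             // svn update + svn revert
                  UPDATE_NEVER_DELETE_WORKSPACE   // proposed: update, never delete workspace
              }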

          kutzi added a comment -

          The link to the discussion on the dev list from Kohsuke's comment is dead. This is the new one:
          http://jenkins.361315.n4.nabble.com/attention-all-subversion-users-td393646.html

          Daniel Kirkdorffer added a comment -

          We're seeing random deletions of our workspace even without any interaction with SVN. We have a few jobs where we turned off SCM polling and we've seen the workspace mysteriously go bye-bye for those.

          Can anyone explain what might be the cause?

          michael elder added a comment -

          I'm constantly snapping our build machine because of this problem. Since the project we are working on has multiple versions, it is more convenient to keep them in a single, custom workspace dir. If for some reason SVN locks one of the project dirs, the next time it updates, the entire workspace is rm -rf'ed.

          Is there a plugin or a way to sidestep this issue until it is resolved?

          Alexandru Gheorghe added a comment (edited) -

          Still there with subversion-plugin 2.5. Our configuration pulls many folders
          under one directory (resembling the layout of the built library), and the
          result should look like this (assuming we're in the $WORKSPACE of the job):

          .
          ├── COPYING
          ├── COPYING.LESSER
          ├── coverage.xml
          ├── src
          │   ├── commands
          │   ├── core
          │   ├── __init__.py
          │   ├── ...
          │   └── webshell
          ├── core_tests
          │   ├── ...
          │   ├── __init__.py
          │   └── test_version.py
          ├── flake8.log
          └── ...
          

          We fetch different folders and put them all in src/, which we then export to
          PYTHONPATH when we run the unit tests from core_tests (with nosetests).

          The problem is that on each fetch from SVN, even though we tell SVN to only
          update, the src/ folder in the workspace gets erased!

          Is it possible to avoid this? A snippet of the fetch:

          Switching to https://.../core at revision '2015-03-06T12:05:11.289 +0100'
          D         __init__.py
          A         tests
          ...
          

          now comes the next checkout

          Switching to https://.../lib at revision '2015-03-06T12:05:11.289 +0100'
          D         core
          D         data
          D         datatypes
          A         commands
          

          As you can see, the "D" entries erase the previous fetches before the new ones
          are added. Is this the intended behavior?

          Later addition: after a few more builds it seems to "fix itself" and conclude
          the build successfully, without any import failures for any libraries (caused
          by the then-deleted folders). We're not sure how correct this is and it needs
          analysis on our part, but is this normal?

          Daniel Beck added a comment -

          alghe:
          In my experience the modules need to be ordered by increasing root folder depth, e.g. my first module checks out to `.`, and subsequent ones to `foo`, `bar`, etc. – maybe something similar helps in your situation.
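
          For example (a hypothetical module list, ordered shallowest-first as
          described; the repository URLs are invented):

              https://svn.example.com/repo/trunk     -> local directory: .
              https://svn.example.com/repo/lib/foo   -> local directory: foo
              https://svn.example.com/repo/lib/bar   -> local directory: bar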

          Note that this issue is ancient and probably should not be revived. File new bugs, or (better if you're not sure it's a bug) ask the jenkinsci-users mailing list.

          Vasili Galka added a comment -

          alghe: I agree with danielbeck; what you described does not seem to be related to the initial point of this issue. Also, I don't fully understand what behaviour you expected. Can you please provide a manual list of svn commands that produce your desired behaviour? Then we can analyze how it differs from what Jenkins does.

          Besides alghe's comment from 2015, can anyone please explain what this issue is about and why it is still open? Does the initial problem still exist? I tried digging through all the above history, but I fail to find a clear scenario description defining the problem.

            Assignee: kaxelson
            Reporter: kaxelson
            Votes: 5
            Watchers: 12