JENKINS-15156

Builds disappear from build history after completion

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Component: core
    • Environment: Jenkins 1.477.2
      Master and slaves on Windows Server 2008 R2
      (also seen on Jenkins 1.488, Windows Server 2008)

      We have recently noticed builds disappearing from the "Build History" listing on the project page. A developer was watching a build, waiting for it to complete, and said it disappeared after it finished. Nothing was noted in any of the logs concerning that build.
      The data was still present on the disk, and doing a reload from disk brought the build back. We have other automated jobs that deploy these builds based on build number, so this is a pretty big issue in our environment.
      We are not able to reproduce it at this point, but I still wanted to document what is happening.
      I have seen other JIRA issues that look similar, but in those, jobs were disappearing after a restart or an upgrade. That is not the case for us: the build disappears after completion, success or failure.
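
      For reference, the reload-from-disk workaround can be scripted rather than clicked. A minimal sketch, assuming Jenkins's standard /reload endpoint and an admin account (the URL and credentials below are placeholders; if CSRF protection is enabled, a crumb header would also be needed):

          import java.net.HttpURLConnection;
          import java.net.URL;
          import java.nio.charset.StandardCharsets;
          import java.util.Base64;

          public class ReloadJenkinsConfig {
              public static void main(String[] args) throws Exception {
                  // Placeholder master URL and credentials - adjust for the real setup.
                  String jenkinsUrl = "http://jenkins.example.com:8080";
                  String token = Base64.getEncoder().encodeToString(
                          "admin:apitoken".getBytes(StandardCharsets.UTF_8));

                  // POSTing to /reload is the same action as the
                  // "Reload Configuration from Disk" link on the Manage Jenkins page.
                  HttpURLConnection conn = (HttpURLConnection)
                          new URL(jenkinsUrl + "/reload").openConnection();
                  conn.setRequestMethod("POST");
                  conn.setRequestProperty("Authorization", "Basic " + token);

                  // Jenkins typically answers with a 302 redirect to the dashboard on success.
                  System.out.println("HTTP " + conn.getResponseCode());
              }
          }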

          J White added a comment -

          I had this problem but after upgrading to Jenkins 1.486 all was back to normal

          Tomas Hellberg added a comment -

          I have the same problem on Jenkins ver. 1.488. The master is Windows Server 2008. The history showed correctly after upgrading to 1.488 but disappeared after the next build, so now the history is empty. The build is visible during job execution but disappears after completion.

          Tomas Hellberg added a comment -

          In my case, this might be related to JENKINS-13536, which causes loading of the history to fail if the temp file stored by the file upload plugin is removed.

          Tomas Hellberg added a comment -

          My job keeps losing builds from the history all the time. An hour ago there were ten builds in the history, and now only the two most recent builds (out of a total of 175) are shown.

          All the build data is still available on disk, and the history will show correctly again for a while following a restart of the master.

          Tomas Hellberg added a comment -

          12 hours later the history is back, this time without a master restart.

          D C added a comment -

          I have the same problem (1.485). Builds disappear from the history. Often only the most recent build is visible.

          Attempting to access the URL of the missing builds manually (i.e. appending /51 to the job URL) produces a 404 error (sorry, I don't have the exact text).

          Confirmed that "Reload configuration from disk" fixes the problem - full build history is again visible, and the build URLs can be accessed manually too.

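          A minimal sketch of how this symptom can be checked from a script, assuming the usual /job/<name>/<buildNumber>/ URL layout (the job URL and build-number range below are placeholders):

              import java.net.HttpURLConnection;
              import java.net.URL;

              public class ProbeBuilds {
                  public static void main(String[] args) throws Exception {
                      // Placeholder job URL; probe build numbers known to exist on disk.
                      String jobUrl = "http://jenkins.example.com:8080/job/my-job/";
                      for (int build = 45; build <= 55; build++) {
                          HttpURLConnection conn = (HttpURLConnection)
                                  new URL(jobUrl + build + "/").openConnection();
                          conn.setRequestMethod("HEAD");
                          // A 404 here, while the build data exists on disk,
                          // matches the behaviour described in this report.
                          System.out.println("build #" + build + " -> HTTP " + conn.getResponseCode());
                      }
                  }
              }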

          Christian Köberl added a comment -

          We have the same issue with 1.489. Restarting or reloading the config from disk fixes this temporarily, but the build history eventually gets lost again after the next build. We have a master/slave setup with 3 slaves executing jobs (all machines on Linux SLES).

          Cameron Horn added a comment -

          We notice this in particular with newly created jobs, where all build history vanishes after a build or two.

          Anton Haglund added a comment -

          I have seen this in 1.486 as well. I renamed a couple of jobs, and after that all history for those jobs disappeared.

          Franck Derunes added a comment -

          Same experience here. I renamed a couple of jobs and the build history disappeared. Restarting the server solves the issue.

          Johno Crawford added a comment -

          Those who are able to reproduce the bug: are you running a standalone Jenkins instance, or do you have slaves configured?

          Tatiana Kalpina added a comment - edited

          I reproduced the issue when a job was renamed and then run on a slave (I did not check on the master).

          Anton Haglund added a comment -

          johno: I have several build slaves configured in my setup.

          Christian Köberl added a comment -

          This seems to happen only with a master/slave (multiple nodes) setup.

          We have a second Jenkins (both are 1.489) with nearly the same configuration - the only difference is that it has no slaves. There the problem does not occur.

          Nicolas Anzalone added a comment - edited

          I just started using Jenkins about a month ago, and this was happening with some frequency on a stand-alone server (no master/slave setup). I haven't seen the problem in the last 2 weeks, however. As per D C's comment, reloading from disk resolves the issue whenever I see it.

          Edit: it just happened again a few minutes ago. It may have been caused by renaming: I renamed the job in question to make way for a better implementation of the same job, but I've done the same thing to a few others without seeing this.

          Tyler Ohlsen added a comment - edited

          This started happening for us after we upgraded to 1.494. Reloading the configuration from file fixes it temporarily. We have a master/slave setup.
          Edit: This started happening, and continues to happen after every build, just after I renamed my project.

          Yong Guo added a comment - edited

          I have this problem after upgrading to 1.494. Master/slave setup (all Linux boxes). Only the jobs created after upgrading have this problem; those created before the upgrade work well.

          (updated) I find that all the builds exist in the build directory. The problem is that they are not shown in the build history.

          sogabe added a comment -

          Changed the priority to Critical.

          Yong Guo added a comment -

          (updated) I just upgraded to 1.495, and the history is back!

          damian dixon added a comment -

          This is happening with 1.494.
          Reloading the configuration brings back the build logs.

          Larry Cai added a comment -

          I have this problem in 1.495 as well, master/slave mode

          Nikolay Martynov added a comment -

          Reloading the history isn't really a solution: CI is not for humans (why would we need CI then?) but for automation, and this bug makes plugins like "Copy Artifact" fail. All of this results in lots of false build failures. Since people can't really do anything about these failures, they tend to just start ignoring "crazy Jenkins", and that ruins the whole concept.

          Martin Wiklundh added a comment - edited

          I made a copy of a broken project and the copy seems to work.
          Edit: Renaming the copy broke it, but renaming it back again brought the history back.

          jchatham added a comment -

          For reference, we're seeing this issue under 1.486, on a Maven job running exclusively on the master. Other apparently identically set up Maven jobs (also running on the master) don't exhibit the same problem, however.

          As a possibly related issue, the jobs where builds disappear have also shown build statuses vanishing: several early builds had test failures, and every now and then those builds turn up as blue (before eventually vanishing entirely). In both cases, reloading the configuration from disk temporarily resolves the issue.

          Peter Loron added a comment -

          Also seeing this on 1.498. Linux master. Win2K8 slave.

          Ivaylo Bratoev added a comment -

          Same issue on 1.493. The main Jenkins is on Windows Server 2008 R2 with 1 child node on the same OS. Reload Configuration from Disk fixed the issue, at least for now. I also noticed that only the builds running on the child node lost their history; the builds on the master were OK.

          Joel Johnson added a comment -

          We have the same problem with 1.499. It happens very frequently with the individual matrix builds. It happens so fast that we run a job, click on the console for the section of the matrix, and IM it to someone else, and by the time they click the link, it's gone.

          I don't understand why this is still an issue, and it hasn't even received so much as a comment from someone. This is a huge problem. As a previous commenter stated, it makes people who use Jenkins blame Jenkins for more and more problems. I'm trying my best at the office to keep Jenkins from becoming the scapegoat, but it's bugs like this that make people trust it less and less.

          Ubuntu Server 12.04 - Tomcat7 Container.

          Andrzej Pasterczyk added a comment -

          We experience a similar issue to what joelj mentioned. For regular jobs the build history is kept much longer, while for matrix jobs it tends to disappear randomly, even right after a build completes. Sometimes there's a link for the build in the history on the master but not on the slaves, sometimes it's gone from both nodes, and sometimes there's a link but trying to access it returns a 404.
          Even reloading the configuration makes no difference here. Running on Windows Server 2008 x64, Tomcat7.
          Is there a way to raise the priority on this issue? It is a real blocker, especially in terms of keeping historical test data - we get an email that something failed, but it's extremely hard to trace down what it was if there's no history.

          Nikolay Martynov added a comment -

          Rearranged the links a bit, as this issue was marked as a duplicate of a newer issue that didn't get as much attention...

          Martin d'Anjou added a comment -

          We're using 1.492. Suddenly, builds #1 to #9 disappeared. After a day or two we decided to launch a build using a different user account. Now the builds do show up, but only starting at #10. I agree with making this bug a blocker.

          Nikolay Martynov added a comment -

          Does anyone know a version where this bug wasn't introduced yet?

          Henri Gomez added a comment - edited

          Same problem here, with 1.498 (powered by Tomcat 7.0.35) under openSUSE Linux.
          I noticed this problem appears for jobs built on agents.

          Joel Johnson added a comment -

          @Nikolay Martynov: It's been happening to us ever since we upgraded to the lazy loading release (1.485).

          Richard Mortimer added a comment -

          I'm wondering if this is due to a failure to re-load an individual build record in AbstractLazyLoadRunMap.java.

          Specifically, the search(int, Direction) method does a binary search looking for a specific build. When it finds a match it may have to load() the build record from disk. If this fails, it "silently" removes the build that it tried to load and carries on:

              R r = load(idOnDisk.get(pivot), null);
              if (r==null) {
                  // this ID isn't valid. get rid of that and retry pivot
                  hi--;
                  if (!clonedIdOnDisk) {// if we are making an edit, we need to own a copy
                      idOnDisk = new SortedList<String>(idOnDisk);
                      clonedIdOnDisk = true;
                  }
                  idOnDisk.remove(pivot);
                  continue;
              }
          

          Assuming the failure to load is an (unknown) transient error, that would cause the build to disappear, but it would be re-loaded when Jenkins is restarted and the on-disk records are scanned again.

          I haven't seen this failure mode myself, so I'm not able to test directly. If someone is able/willing to run a debug build of the latest Jenkins, I'll try to find time to add some additional debug logging to see if we can prove/disprove this theory.
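
          To make that concrete, here is a minimal sketch of the kind of logging that could be added around the silent removal (a hypothetical patch; the LOGGER field and the message wording are assumptions, not current core code):

              // Hypothetical: a logger added to AbstractLazyLoadRunMap so that a
              // failed load() leaves a trace instead of being discarded silently.
              private static final java.util.logging.Logger LOGGER =
                      java.util.logging.Logger.getLogger(AbstractLazyLoadRunMap.class.getName());

              R r = load(idOnDisk.get(pivot), null);
              if (r == null) {
                  // Record the drop: a transient I/O error here would then show up in
                  // the Jenkins log instead of manifesting only as a vanished build.
                  LOGGER.log(java.util.logging.Level.WARNING,
                          "failed to load build record {0}; removing it from the in-memory index",
                          idOnDisk.get(pivot));
                  hi--;
                  if (!clonedIdOnDisk) { // if we are making an edit, we need to own a copy
                      idOnDisk = new SortedList<String>(idOnDisk);
                      clonedIdOnDisk = true;
                  }
                  idOnDisk.remove(pivot);
                  continue;
              }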

          Nathan Neulinger added a comment - edited

          Same here with 1.499. Reload from disk consistently clears it, but that's hardly an option for automated build chains. Our setup is client/server with 5 slaves.

          Jurgen Van Bouchaute added a comment -

          Same problem on version 1.498 (Linux) - a very annoying bug.

          Dominik Bartholdi added a comment -

          Sorry Jesse, I'm assigning this to you so that some more core devs are aware of this one...

          Jesse Glick added a comment -

          Does anyone have any clue how to reproduce this from scratch?

          There are intimations that this is a regression from lazy loading (JENKINS-8754), yet this was originally filed against an older Jenkins version, so it is possible there are two or more unrelated bugs being lumped together here.

          Franck Derunes added a comment -

          I am using 1.493, and I can reproduce this just by renaming a job.
          I suspect this could also happen when changing anything in the job config (but this would have to be verified).

          I fix it just by restarting Jenkins via http://IP:8080/restart.
          The history comes back with all the jobs.

          Nikolay Martynov added a comment -

          For us it's just random, since we do not (and can't) rename jobs; no particular conditions were noted. Right now it's OK, but after a couple of builds on any job the history starts disappearing; reloading the configuration from disk brings the builds back. Since the Copy Artifact plugin is affected, I believe this is not a UI problem. We make extensive use of matrix projects and ssh slaves with the Matrix Tie Parent plugin (I don't know if this matters). We downgraded back to 1.463 and the problem disappeared.

          aeschbacher added a comment -

          Same main usage for us:

          • heavy use of the "Copy Artifact" plugin
          • matrix projects (with the "Matrix tie parent" plugin)
          • ssh slaves

          At the beginning, we indeed thought this was related to renaming the build job, but the problem also occurs, for example, when modifying the slaves in the configuration matrix. And sometimes it occurs with free-style (not multi-configuration) jobs too.

          michael d added a comment -

          Also experiencing this in 1.499 on RHEL.
          Reloading configuration from disk seems to solve it.

          I can't find any pattern to reproduce this, and the logs don't say anything.
          Voodoo.

          Nathan Neulinger added a comment -

          I've seen it regularly with renamed jobs. We make heavy use of Copy Artifact, no matrix use, several ssh slaves. Almost all free-style jobs (though there may be a Maven one in there too).

          Nathan Neulinger added a comment -

          Running on fc17 x86_64.

          Linards L added a comment -

          For me the fix, using v1.494, was simply to copy the existing faulty job to a new one.

          Probably the renaming of the job/project causes this. The ridiculous side effect is that there is no validation check covering the current build number and the just-created/[successfully] built and archived artifacts. If there were a simple existence check validating that the last build actually created artifacts and that they ARE ACCESSIBLE to the user, this would be a pretty minor issue in my build system. As it is, hitting it is a pretty pain-in-the-ass-categorizable one...

          I always wonder why new features get implemented instead of simply avoiding/fixing the blockers lurking everywhere in the Jenkins core :/

          Ivaylo Bratoev added a comment -

          As a matter of fact, I renamed these jobs some time ago as well... This might be one part of the issue.

          Richard Merrill added a comment -

          I have had this issue occur without ever renaming any jobs, so I'm afraid the bug is a little more complicated. When it does happen, it is often for more than one job. Without my doing anything, the problem can disappear by the next time I open the web interface; or sometimes it is more persistent and doesn't go away until I reload the configuration from disk. I have versions 1.497 and 1.500 running on two separate WS2008R2 servers, and it has happened on both. Fortunately, it hasn't happened very often lately...

          Linards L added a comment -

          OK. First of all, the devs have got to pin down the point at which this nonsense started. On my second machine, using v1.454, this has never happened. Others...? It seems the Jenkins infrastructure doesn't have any way to run regression tests... like, for example, the WineHQ guys have...

          jenkinsuserfrance added a comment -

          Hello,
          We had a similar problem with v1.483. It happened on several jobs which were never renamed. Reloading the configuration restored the situation for most jobs. For a few, some builds were still missing from the history; on analysis, the missing builds were also lacking the build.xml in their "build/<TimeStamp>" folder.

          Julian Taylor added a comment -

          For me it has happened since lazy loading of build records was introduced, so version 1.485.
          I haven't verified that reverting back to 1.484 fixes it.

          The issue of jenkinsuserfrance in 1.483 is probably something else, because the build/ folder is perfectly all right; it just does not load what's in there.

          Jesse Glick added a comment -

          Right, it is entirely possible there are two or more unrelated bugs lumped together here: at least, one present already in older versions of unknown cause; and one triggered by job renaming in 1.485+.

          bbonn added a comment -

          Hi all,

          I have found a way to reproduce it in our environment. Not sure if this will be the same for all of you, but maybe it is a start for debugging. I noticed recently that while un-shelving a job (a plugin we use quite frequently), a job that was running displayed a strange 404 error in the console. See below:

          15:27:09 [WARNINGS] Parsing warnings in console log...
          15:27:09 Archiving artifacts
          Status Code: 404
          Exception:
          Stacktrace:
          (none)

          I then went back to the job page, refreshed, and the build had disappeared. A reload from disk brought it back as usual. I have recreated this a couple of times now, so I'm not sure if the Shelve plugin is the culprit or some other underlying piece that interacts with plugins.

          Also, I don't always get the 404 in the console; sometimes after starting an un-shelving job, the link for the build console goes right to a generic 404 error page.

          Jenkins 1.480.1
          Windows Server 2008 R2 (Master and Slave)

          Can anyone else recreate this way?

            Assignee: Unassigned
            Reporter: bbonn
            Votes: 61
            Watchers: 116