JENKINS-11547

Jobs trigger continually even though there are no changes due to git repository being "corrupt"

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: git-plugin
    • Labels: None

      There is a problem with the git polling mechanism that causes all our jobs to trigger themselves continually. This happens at random times and eventually fixes itself, but the large number of spurious builds is causing us all sorts of problems.

      This is an example of the git polling log:

      Started on 28-Oct-2011 03:20:22
      Using strategy: Default
      [poll] Last Build : #480
      [poll] Last Built Revision: Revision abcb8a2492b390521e0c720f96f66a88eae09f18 (origin/master)
      Workspace has a .git repository, but it appears to be corrupt.
      No Git repository yet, an initial checkout is required
      Done. Took 0.26 sec
      Changes found
      

      This is caused when "git rev-parse --verify HEAD" fails for some reason, but there is no logging to help determine what went wrong. The try/catch around the validateRevision call looks too simplistic: the cause of the exception should be examined before returning false, as the sketch below illustrates.
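      A minimal Groovy sketch of that idea (illustrative only, not the plugin's actual code; the workspace path is hypothetical) would inspect the exit code and stderr of the rev-parse call before concluding that the repository is corrupt:

          // Illustrative sketch, not GitAPI code: run "git rev-parse --verify HEAD"
          // and inspect the exit code and stderr instead of treating every
          // failure as a corrupt repository.
          def workspace = new File('/path/to/workspace')   // hypothetical path

          def proc = new ProcessBuilder('git', 'rev-parse', '--verify', 'HEAD')
                  .directory(workspace)
                  .start()
          def out = proc.inputStream.text.trim()
          def err = proc.errorStream.text.trim()
          proc.waitFor()

          if (proc.exitValue() == 0) {
              println "HEAD resolves to ${out} - repository looks healthy"
          } else if (err.contains('not a git repository')) {
              println 'No Git repository yet, an initial checkout is required'
          } else {
              // Transient failure (locked index, stray warning text, etc.):
              // surface the cause instead of silently reporting changes.
              println "rev-parse failed (exit ${proc.exitValue()}): ${err}"
          }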

          [JENKINS-11547] Jobs trigger continually even though there are no changes due to git repository being "corrupt"

          Murali Srinivasan added a comment -

          I'm seeing the same issue in many of the repositories I have configured in Jenkins. It would be great if this issue could be fixed.

          marlene cote added a comment -

          We are also seeing builds getting kicked off when nothing has changed in the git repo. I hope you fix it soon.


          Ulli Hafner added a comment -

          Here is an example of a 'continuously' building job: http://ci.jenkins-ci.org/view/Plugins/job/plugins_analysis-core/


          Corey Groves added a comment -

          This seems to be Windows-specific in our case: our Linux build machines don't show the problem, but all of our Windows build servers do.


          Corey Groves added a comment -

          I would suggest this should be upgraded to blocking. The 20 or so jobs we have converted to Git are rendering the build server unusable since they are constantly firing.


          Murali Srinivasan added a comment -

          This is really generating a lot of unwanted builds on my build server and unnecessarily triggering a lot of emails.

          nanda kishore added a comment -

          This problem didn't show up once I made the jobs poll the SCM in a definite order instead of all at the same time: the first job polls at minute 1, the second at minute 2, and so on, so there is no congestion in the build pipeline. Previously all the jobs polled the SCM at the same time, and that could have been what produced the "corrupt" repositories. I am not sure; I am just guessing. With the staggered polling, the corrupt-repo issue didn't show up and there weren't any unnecessary builds.
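          A hedged Groovy sketch of that staggering idea, runnable from the Jenkins script console (it assumes every item is a freestyle project and that addTrigger replaces any existing polling trigger of the same type; the one-minute spacing is arbitrary):

          import hudson.model.Hudson
          import hudson.triggers.SCMTrigger

          // Give each job its own polling minute so they don't all
          // hit the remote repository at once.
          int minute = 0
          for (item in Hudson.instance.items) {
              def spec = "${minute % 60} * * * *"   // one job per minute slot
              item.addTrigger(new SCMTrigger(spec))
              item.save()
              println("JOB : " + item.name + " -> polls at '" + spec + "'")
              minute++
          }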

          Murali Srinivasan added a comment -

          @nanda kishore, I tried your approach and still had no luck.

          Corey Groves added a comment -

          With some debugging, it looks like the Launcher sometimes embeds the message
          Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
          on the second line of the returned text. This causes a multi-line response exception in the firstLine method of GitAPI.java.
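          A small Groovy sketch of one way around that (illustrative only; firstLineLenient is a hypothetical helper, not the actual GitAPI method): keep the first non-empty line instead of rejecting multi-line output outright.

          // Hypothetical lenient variant: return the first non-empty line and
          // ignore trailing diagnostics such as the "Process leaked file
          // descriptors" warning.
          String firstLineLenient(String result) {
              def lines = result.readLines().findAll { it.trim() }
              if (lines.isEmpty()) {
                  throw new IllegalStateException('Result is empty')
              }
              return lines[0]
          }

          def noisy = 'abcb8a2492b390521e0c720f96f66a88eae09f18\n' +
                  'Process leaked file descriptors. See the wiki for more information'
          assert firstLineLenient(noisy) == 'abcb8a2492b390521e0c720f96f66a88eae09f18'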

          nanda kishore added a comment - edited

          There is a pull request mentioning a fix related to this issue here: https://github.com/jenkinsci/git-plugin/pull/41
          I am surprised to find that it is "closed" and not merged. Any ideas why?

          Valentin Rentschler added a comment -

          Is there any combination of Jenkins/git-plugin where this does not occur?

          oliver zemann added a comment -

          This bug is still open, and it's the worst bug I have ever seen in Jenkins! Is there any workaround except switching to another CI?

          oliver zemann added a comment -

          [poll] Last Built Revision: Revision 671bae9b14bb9e65642ec808c2aee5f85aa0f87a (origin/HEAD, origin/master)
          ERROR: Workspace has a .git repository, but it appears to be corrupt.
          hudson.plugins.git.GitException: Result has multiple lines
          at hudson.plugins.git.GitAPI.firstLine(GitAPI.java:307)
          at hudson.plugins.git.GitAPI.validateRevision(GitAPI.java:281)
          at hudson.plugins.git.GitAPI.hasGitRepo(GitAPI.java:121)
          at hudson.plugins.git.GitSCM$1.invoke(GitSCM.java:736)
          at hudson.plugins.git.GitSCM$1.invoke(GitSCM.java:729)
          at hudson.FilePath.act(FilePath.java:842)
          at hudson.FilePath.act(FilePath.java:824)
          at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:729)
          at hudson.scm.SCM._compareRemoteRevisionWith(SCM.java:356)
          at hudson.scm.SCM.poll(SCM.java:373)
          at hudson.model.AbstractProject.poll(AbstractProject.java:1363)
          at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:420)
          at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:449)
          at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
          at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
          at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
          at java.util.concurrent.FutureTask.run(Unknown Source)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
          at java.lang.Thread.run(Unknown Source)
          No Git repository yet, an initial checkout is required
          Done. Took 21 sec
          Changes found

          That's what I also found on some repos; maybe it helps to track down the problem.

          L J added a comment -

          I had success working around this by deactivating all jobs and then slowly reactivating them. The error seemed to be caused by too many jobs making git requests at once. Scripts are pasted below; the second one times out (proxy 502 error) but still runs to completion. The basic idea is taken from a Jenkins wiki page.

          import hudson.model.*

          // Disable every job.
          for (item in Hudson.instance.items) {
              println("JOB : " + item.name)
              item.disabled = true
              item.save()
              println("\n=======\n")
          }
          -----------------------------------------------------

          import hudson.model.*

          // Re-enable the disabled jobs one at a time, pausing between each
          // so they don't all start polling at once.
          for (item in Hudson.instance.items) {
              println("JOB : " + item.name)
              if (item.disabled == true) {
                  item.disabled = false
                  item.save()
                  sleep(25000)   // wait 25 seconds before the next job
              }
          }

          Daniel Haskin added a comment -

          We have the same problem, even though the git executable is flock'ed on our server so that only one process can actually run git at a time. The issue still persists for us.
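          For readers unfamiliar with that setup, a rough Groovy equivalent of the flock approach (a sketch only; the lock-file path is hypothetical) serializes git invocations behind an exclusive file lock:

          import java.nio.channels.FileChannel
          import java.nio.file.Paths
          import java.nio.file.StandardOpenOption

          // Take an exclusive lock on a shared lock file so only one
          // process runs git at a time (rough equivalent of flock).
          def channel = FileChannel.open(Paths.get('/tmp/git.lock'),
                  StandardOpenOption.CREATE, StandardOpenOption.WRITE)
          def lock = channel.lock()   // blocks until the lock is acquired
          try {
              def proc = ['git', 'rev-parse', '--verify', 'HEAD'].execute()
              println proc.in.text.trim()
              proc.waitFor()
          } finally {
              lock.release()
              channel.close()
          }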

          Justin Zaun added a comment -

          Hi everyone. I registered this issue in the "kickstarting" section on FreedomSponsors. This means that if you need this issue fixed that badly, you can go to http://www.freedomsponsors.org/core/issue/113/jobs-trigger-continually-even-though-there-are-no-changes-due-to-git-repository-being-corrupt and offer a few bucks for it.

          Justin Zaun added a comment -

          Is it at least possible not to trigger the build on this error and just wait until the next check?
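          A hedged Groovy sketch of what that suggestion might look like (illustrative only, not the plugin's code; pollSafely is a hypothetical wrapper):

          import hudson.scm.PollingResult

          // If polling itself fails, report NO_CHANGES so the next poll
          // retries, instead of treating the error as "changes found".
          PollingResult pollSafely(Closure<PollingResult> doPoll) {
              try {
                  return doPoll()
              } catch (Exception e) {   // e.g. hudson.plugins.git.GitException
                  println("Polling failed (" + e.message + "); retrying next cycle")
                  return PollingResult.NO_CHANGES
              }
          }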

          Nicolas De Loof added a comment -

          Fixed by switching to the JGit implementation in the git-client plugin. Using the git CLI and parsing its output is a fragile approach that I expect to fully replace with pure JGit.
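          For anyone wanting to try that, a hedged script-console sketch (it assumes a git-client plugin version that ships JGitTool and a GitTool descriptor exposing getInstallations/setInstallations) registers the pure-Java "jgit" tool so jobs can select it instead of the git CLI:

          import jenkins.model.Jenkins
          import hudson.plugins.git.GitTool
          import org.jenkinsci.plugins.gitclient.JGitTool

          // Register the pure-Java JGit implementation as a selectable
          // git tool alongside the command-line git installations.
          def desc = Jenkins.instance.getDescriptor(GitTool)
          def tools = desc.installations.toList()
          tools << new JGitTool([])
          desc.setInstallations(tools as GitTool[])
          desc.save()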

          Sean Flanigan added a comment -

          It seems that JGit was later disabled by default in git-client-plugin 1.0.5 because of problems with it.

          Is there a solution for this continual trigger problem when using git cli?


          Oliver Gondža added a comment -

          Reopening the issue as JGit has problems of its own.

          Narayana katooru added a comment - edited

          Noticed this on my Jenkins. Is this still open?

          Murali Srinivasan added a comment -

          It is funny that this issue still exists after 8 years.

          Mark Waite added a comment -

          chandramuralis I'm happy to consider a pull request from someone with tests that show the problem, code changes that resolve it, and a detailed explanation of the root cause. I've not encountered this problem in my own use of the git plugin.


            Assignee: Unassigned
            Reporter: James Cook (james_cookie)
            Votes: 19
            Watchers: 31
