Type: Bug
Resolution: Fixed
Priority: Major
Environment: Jenkins 2.10, github-branch-source-plugin 1.7
We have quite a large organization in Github, lots of repos with lots of branches. Running the organization scan works well for a while, and then begins erroring out with
org.jenkinsci.plugins.github_branch_source.RateLimitExceededException: GitHub API rate limit exceeded
This is pretty rough since it always starts in the same place, and I haven't been able to figure out ANY way to add a project from the back of the list that recently added a Jenkinsfile. I am using a valid set of credentials (all the repos are private, so it wouldn't work otherwise anyway).
Attachments: github-branch-source.hpi (1.70 MB), github-api.hpi (2.01 MB), branch-api.hpi (234 kB), cloudbees-folder.hpi (185 kB), screenshot-1.png (39 kB)
is duplicated by:
- JENKINS-33490 RateLimitExceededException not handled well inside iterators (Closed)
- JENKINS-37866 The plugin spams Github API (Closed)
- JENKINS-41332 Plugin should be more resilient to network or rate-limit errors (Closed)
is related to:
- JENKINS-38937 GitHub API cache is not working (Closed)
- JENKINS-34600 Improve the performance of scheduling a build (Closed)
- JENKINS-37866 The plugin spams Github API (Closed)
relates to:
- JENKINS-33490 RateLimitExceededException not handled well inside iterators (Closed)
- JENKINS-41112 GitHub Branch Source should throttle calls to stay below rate limit (Closed)
- JENKINS-42400 GitHub Branch source plugin hits rate limit too easily (Closed)
[JENKINS-36121] Github Branch Source plugin trips api rate limit
Could it make some number of requests below 5000, remember where it was, and pick it up again in the next hour? Doesn't the other GitHub plugin do something like that?
I'm seeing the rate limit exhausted on a repo with just a hundred or so branches - this repo has a similar number of PRs, so the existing options about what gets built don't provide me with any help.
Is there any way I can control the frequency of branch indexing? It seems to be happening far, far too frequently for this repo (every few minutes by the looks?!)
Is it doing something silly like triggering a full branch indexing run when it gets a 'new branch created' webhook? Otherwise I can't see why it would need to reindex so often.
Seems to be exacerbated by lack of visibility into how often branch indexing is triggering (you can only see the last run, not the frequency or history of runs, so it's easy to miss the fact it's scanning every few minutes, and consuming vast amounts of API requests.)
Reading https://github.com/jenkinsci/github-branch-source-plugin/blob/master/src/main/java/org/jenkinsci/plugins/github_branch_source/PullRequestGHEventSubscriber.java#L100 - it certainly appears that any PR event will cause a full branch reindexing. Is that right?
Well, given that the downside of this is noted at JENKINS-34600, it's definitely not fair to complain to Github about that, jglick. You're expected to authenticate the webhook and use the information it contains, not use it as a trigger for an unbounded number of API requests...
Had a quick go at a throwaway implementation of a fix. Mainly a hacky use of ParameterizedJobMixIn.scheduleBuild2 to schedule a new build for the branches specified in a push event.
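(A rough sketch of that idea, with hypothetical names and none of the plugin's actual wiring: look up the existing branch job named in the push event and schedule it via ParameterizedJobMixIn.scheduleBuild2.)

import hudson.model.Cause;
import hudson.model.CauseAction;
import hudson.model.Job;
import jenkins.model.Jenkins;
import jenkins.model.ParameterizedJobMixIn;

public class PushEventBuildTrigger {
    // Hypothetical helper, not the code in the PR: given the folder, repo and branch
    // named in a push event, find the existing branch job under the multibranch
    // project and schedule a build for it (no full reindexing).
    static void triggerBranchBuild(String orgFolder, String repo, String branch) {
        // Multibranch jobs live at "<folder>/<repo>/<branch>" in the Jenkins item tree.
        Job<?, ?> job = Jenkins.getInstance()
                .getItemByFullName(orgFolder + "/" + repo + "/" + branch, Job.class);
        if (job != null) {
            ParameterizedJobMixIn.scheduleBuild2(job, 0,
                    new CauseAction(new Cause.RemoteCause("github.com", "push event")));
        }
    }
}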
However, I quickly ran into JENKINS-37920 - GBS's GitHubWebhookListenerImpl cannot receive the information it needs from github-plugin. (I notice github-plugin also correctly deals with the secret if one is configured, so if it weren't for the fact it's so hard to get the relevant information out, we could easily check for an authenticated webhook, parse the list of branches/PRs, and do the right thing)
In the meantime, I've had to completely disable webhooks.
If you simply created a few dozen branches in the multibranch-demo repo, you'd easily be able to reproduce this issue after a few pushes. After all, the setup instructions say to "Add a new webhook, ask to Send me everything" - which is a configuration in which branch indexing will trigger several times as you just click around GitHub, e.g. adding labels to PRs.
My hacky fix branch is at https://github.com/vend/github-branch-source-plugin/pull/1; it'll only work for PR builds for now. Comments welcome.
In this implementation, each PR event webhook triggers up to two builds (merged/unmerged), and takes about one Github API request to do so (I think, checking the collaborators for .isTrusted? Maybe 2-3). Pushes to a PR that is already open won't trigger a build, which is a shame (need to rewrite this logic into a pull event subscriber too). But editing the PR, or adding/removing a label should.
OK, I've submitted an upstream PR at https://github.com/jenkinsci/github-branch-source-plugin/pull/74
dominics jglick it seems that if we can assume webhook payloads are authenticated, then it is possible to trust them. If not, then it is a bit philosophical. At the moment they aren't required to be trusted, right? And not trusting them means you can only use them as a trigger to go fetch data?
michaelneale Yeah, good summary, I think.
What's really necessary, IMO, is some flag/signal/getter from github-api that the signature validation has taken place. Then we can smartly either trust the content of the hook, or if we don't, go out and do some sort of 'single-branch reindexing' head-fetch that only consumes a few API requests. (The validation already exists, and takes place if you configure it. It's just unconfigured by default.)
These are incremental improvements. i.e. once we know whether the secret was validated, we can leave full branch reindexing in place for cases when the hook isn't validated. Then add the single-branch "fetch this head from github because the hook wasn't validated" behaviour as an optimisation (basically, just for those too lazy to set up webhooks with a secret? much lower priority)
Against that, however, even triggering full-branch reindexing from unvalidated webhooks would be enough to cause DoS via this issue - so, we should probably completely ignore unvalidated hooks altogether. I don't understand the use-case for not having a secret in place.
I don't understand the use-case for not having a secret in place.
Only that the github plugin does not support secret validation on webhooks, at least last I checked.
github plugin does not support secret validation on webhooks, at least last I checked.
No, like I said, it does validate the secret if you provide one. It just doesn't require one, nor allow other plugins to check for the presence of one.
it does validate the secret if you provide one
Where do you see such code? I do not see it here or here or here.
Not sure where the code is, but it looks like this in configuration:
I assume these strings should make it easy to track down. I've also verified that it's functional, and you get an error logged if the signature is wrong.
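For context, the validation being discussed is GitHub's standard webhook signing: GitHub puts an HMAC-SHA1 of the raw payload, keyed by the shared secret, into the X-Hub-Signature header, and the receiver recomputes and compares it. A minimal standalone sketch of that check (illustrative only, not the github-plugin's actual code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class WebhookSignatureCheck {
    // Returns true if the X-Hub-Signature header ("sha1=<hex>") matches
    // HMAC-SHA1(secret, payload) computed over the raw request body.
    static boolean isValid(String signatureHeader, byte[] payload, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] digest = mac.doFinal(payload);
        StringBuilder expected = new StringBuilder("sha1=");
        for (byte b : digest) {
            expected.append(String.format("%02x", b));
        }
        // Constant-time comparison so the check does not leak timing information
        return MessageDigest.isEqual(
                expected.toString().getBytes(StandardCharsets.UTF_8),
                signatureHeader.getBytes(StandardCharsets.UTF_8));
    }
}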
If I understand correctly, caching was disabled because Github was providing stale data in some cases.
Because of such issue this PR was created: https://github.com/jenkinsci/github-branch-source-plugin/pull/54
It provides two solutions to the problem:
1. introduce a 5-second delay
2. disable the cache
Are we sure both solutions were needed? According to the PR, and a conversation I had with some Github representatives, the delay should be a sufficient solution.
I also opened another bug related to this one, here:
JENKINS-38937
I opened that bug as it appears that the github cache is not working (no folder, no cache files)
I am linking that bug to this one, as I also have a lot of information from github's support organization, with the breakdown of exactly which APIs were hit (percentages and totals)
Over 20% of our rate limit was taken with what github support says are static or near-static requests.
Any ideas here? Even with separate accounts for each repository our organization hits the api limit constantly due to a large number of branches/PRs.
Hello. I work for a large organization that also ran into this problem. We ended up forking the plugin and adding the caching back in. It has worked great for us!
https://github.com/IntuitiveWebSolutions/github-branch-source-plugin
diff --git a/src/main/java/org/jenkinsci/plugins/github_branch_source/Connector.java b/src/main/java/org/jenkinsci/plugins/github_branch_source/Connector.java
index 7838600..41a23b5 100644
--- a/src/main/java/org/jenkinsci/plugins/github_branch_source/Connector.java
+++ b/src/main/java/org/jenkinsci/plugins/github_branch_source/Connector.java
@@ -35,6 +35,7 @@
 import com.cloudbees.plugins.credentials.domains.DomainRequirement;
 import com.cloudbees.plugins.credentials.domains.URIRequirementBuilder;
 import com.google.common.hash.Hashing;
+import com.squareup.okhttp.Cache;
 import com.squareup.okhttp.OkHttpClient;
 import com.squareup.okhttp.OkUrlFactory;
 import hudson.Util;
@@ -53,6 +54,7 @@
 import org.apache.commons.lang.StringUtils;
 import org.jenkinsci.plugins.gitclient.GitClient;
 import org.jenkinsci.plugins.github.config.GitHubServerConfig;
+import org.jenkinsci.plugins.github.internal.GitHubClientCacheOps;
 import org.kohsuke.github.GitHub;
 import org.kohsuke.github.GitHubBuilder;
 import org.kohsuke.github.RateLimitHandler;
@@ -60,6 +62,7 @@
 import static org.apache.commons.lang3.StringUtils.trimToEmpty;
 import static org.jenkinsci.plugins.github.config.GitHubServerConfig.GITHUB_URL;
+import static org.jenkinsci.plugins.github.internal.GitHubClientCacheOps.toCacheDir;

 /**
  * Utilities that could perhaps be moved into {@code github-api}.
@@ -83,6 +86,7 @@
     }

     public static @Nonnull GitHub connect(@CheckForNull String apiUri, @CheckForNull StandardCredentials credentials) throws IOException {
+        GitHubServerConfig config = new GitHubServerConfig(credentials != null ? credentials.getId() : null);
         String apiUrl = Util.fixEmptyAndTrim(apiUri);
         String host;
         try {
@@ -97,6 +101,11 @@
         gb.withRateLimitHandler(CUSTOMIZED);

         OkHttpClient client = new OkHttpClient().setProxy(getProxy(host));
+        client.setCache(GitHubClientCacheOps.toCacheDir().apply(config));
+        if (config.getClientCacheSize() > 0) {
+            Cache cache = toCacheDir().apply(config);
+            client.setCache(cache);
+        }
         gb.withConnector(new OkHttpConnector(new OkUrlFactory(client)));
It was part of a solution meant to solve https://issues.jenkins-ci.org/browse/JENKINS-34727. As said in an earlier comment, you can see PR https://github.com/jenkinsci/github-branch-source-plugin/pull/54 for the discussion. From what we've seen, the real fix was the time delay in this file https://github.com/jenkinsci/github-branch-source-plugin/pull/54/files#diff-4977ee4775fa12cb179cc309dcaece54R50. So we took out the part of that PR removing the cache, and it has gone well.
Feel free to pull down our fork https://github.com/IntuitiveWebSolutions/github-branch-source-plugin and try it out for yourself. https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Byhand
I'm seeing this issue as well (new to Jenkins) and it's a bit of a head-scratcher, since this plugin is a dependency of Blue Ocean; to use the patched version I'd have to remove that.
Any thoughts on when this will make it to a release?
https://gist.github.com/bsodmike/b466db06a42848428055a4ee8948de78#gistcomment-1926771
bsodmike I am not sure if there is a PR open with that caching fix is there?
If you want to use the forked/patched version with Blue Ocean, that will work fine (it's not a tight dependency).
Thanks Michael – oddly, it started working so whatever was tripping the rate limit seemed to go away, even if just briefly. Will Yakshave this later on, ta.
stephenconnolly is introducing a major new system for event handling so this may become obsolete.
Please verify whether this issue is still present with GitHub Branch Source 2.0.0-beta-1 (available from the experimental update center now) or 2.0.0 (available in early January 2017).
I don't have time to test it right now, and would rather not interrupt the rest of the engineers on my team if it isn't resolved.
I have a pretty good feeling that it is not resolved in that release candidate, as I cannot find any caching implemented for the api responses.
It's not that simple.
The changes in the API mean that it no longer does a full scan for each event received. Instead, when an event is received only the affected branches are re-checked... if we added caching on top of that, you would basically be masking the changes that the event reported behind the cache.
The event handling significantly reduced the amount of rate limit consumed by the plugin. In some cases I have seen a reduction of 2-3 orders of magnitude.
But if you do not want to test this side of the new year that is perfectly understandable.
Awesome. That definitely sounds like a good change. Hopefully we can try it out some time in late January when things are supposed to settle down for us a bit.
Thanks for the info and update. I'll post here when we get the chance to try it.
I was running into this problem; I haven't run into it since installing this beta (and waiting for a bit so that the Github API rate limits reset).
FTR I filed JENKINS-41112 as a dupe with many details, so summarizing here. With an org with a really large number of repositories this still fails, but I can confirm it is indeed an order of magnitude better. From looking at the logs, I think we were almost done before hitting the limit; my gut feeling is that with, say, only 900 or 1000 Git repositories it could have worked.
We have an org with more than 1200 repositories, so at some point the plugin should probably throttle calls; it also seems pretty stupid that GitHub doesn't just allow us to pay for raising that limit a bit (I mean, it's obvious that in general you're likely to need more calls on an org with more repositories...), but that's another discussion.
Sample of messages that can be seen in jenkins.log:
Jan 24, 2017 9:10:27 PM hudson.model.Executor finish1
SEVERE: Executor threw an exception
java.lang.Error: org.jenkinsci.plugins.github_branch_source.RateLimitExceededException: GitHub API rate limit exceeded
at org.kohsuke.github.Requester$PagingIterator.fetch(Requester.java:506)
at org.kohsuke.github.Requester$PagingIterator.hasNext(Requester.java:471)
at org.kohsuke.github.PagedIterator.fetch(PagedIterator.java:44)
at org.kohsuke.github.PagedIterator.hasNext(PagedIterator.java:32)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator.visitSources(GitHubSCMNavigator.java:291)
at jenkins.branch.OrganizationFolder.computeChildren(OrganizationFolder.java:398)
at com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.updateChildren(ComputedFolder.java:219)
at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:141)
at jenkins.branch.OrganizationFolder$OrganizationScan.run(OrganizationFolder.java:849)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: org.jenkinsci.plugins.github_branch_source.RateLimitExceededException: GitHub API rate limit exceeded
at org.jenkinsci.plugins.github_branch_source.Connector$1.onError(Connector.java:177)
at org.kohsuke.github.Requester.handleApiError(Requester.java:649)
at org.kohsuke.github.Requester$PagingIterator.fetch(Requester.java:500)
... 10 more
Jan 24, 2017 9:10:53 PM com.squareup.okhttp.ConnectionPool pruneAndGetAllocationCount
WARNING: A connection to https://api.github.com/ was leaked. Did you forget to close a response body?
stephenconnolly this is still an issue. And on an initial scan of an Org, we are guaranteed to hit this limit and never be able to complete the Org scan. How does one proceed in this event?
As a workaround, I have created multiple GitHub Organization Folders, with a Repository name pattern to limit the number of matching repositories in that folder. Then ensure the Build Periodically is set to H H * * *, which will spread the automatic re-scans throughout the day.
For my company it made sense to group repos by the team responsible for that repository, but I imagine a simple alphabetical partition would work fine too. Note that the search functionality (upper right corner) works fine no matter how the folders are organized.
I believe all of this is supposed to be resolved (no workarounds needed) in the SCM 2.0 plugins, which have recently been re-cleared for production install (but I've not tried yet myself).
morgan_goose Correct this is still an issue, but it is not as bad. I have some changes that we are testing that proactively delay operations to prevent the rate limit from being exhausted and allow the operations to complete. Those changes will resolve this issue but we decided that they were too much to add in with the change to SCM API 2.0.x's event API
The plan is to release the rate limit throttling fixes as we are confident in each fix, but the immediate priority is actually fixing JENKINS-36029 (which affects BitBucket not GitHub but is very very bad)
Yes I confirmed today - it is a whole lot better than it was (today I wasn't able to exhaust the api, which is good).
I'm having trouble following the 30+ comments and all the other open issues surrounding GitHub rate limits, but this seems to have gotten much worse, not better, for me. Now when re-scanning hits the rate limit, not only can I not perform more builds or scan for new jobs, but the jobs that were reached after the rate limit was hit (myproject-c through myproject-g in my example) actually get deleted, since the plugin concludes they no longer exist, and the scan then completes with "SUCCESS".
Is this plugin considered experimental or do you advise downgrading to 1.x? Has anyone had luck contacting Github & getting API rate limits increased? Really feeling dead in the water here as we need to disable all Github scanning for fear of losing current jobs and not being able to perform other Github operations.
Example log:
... many branches/repos scanned successfully, eventually followed by many similar exceptions, and finally projects that can no longer be scanned being removed from our list of jobs:
ERROR: Failed to create or update a subproject my-project-a
org.jenkinsci.plugins.github_branch_source.RateLimitExceededException: GitHub API rate limit exceeded
at org.jenkinsci.plugins.github_branch_source.Connector$1.onError(Connector.java:259)
at org.kohsuke.github.Requester.handleApiError(Requester.java:649)
at org.kohsuke.github.Requester._to(Requester.java:284)
at org.kohsuke.github.Requester.to(Requester.java:225)
at org.kohsuke.github.GitHub.checkApiUrlValidity(GitHub.java:669)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.retrieve(GitHubSCMSource.java:414)
at jenkins.scm.api.SCMSource._retrieve(SCMSource.java:300)
at jenkins.scm.api.SCMSource.fetch(SCMSource.java:254)
at jenkins.branch.MultiBranchProjectFactory$BySCMSourceCriteria.recognizes(MultiBranchProjectFactory.java:260)
at jenkins.branch.OrganizationFolder$SCMSourceObserverImpl$1.recognizes(OrganizationFolder.java:1153)
at jenkins.branch.OrganizationFolder$SCMSourceObserverImpl$1.complete(OrganizationFolder.java:1168)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator.add(GitHubSCMNavigator.java:459)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator.visitSources(GitHubSCMNavigator.java:319)
at jenkins.branch.OrganizationFolder.computeChildren(OrganizationFolder.java:398)
at com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.updateChildren(ComputedFolder.java:219)
at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:141)
at jenkins.branch.OrganizationFolder$OrganizationScan.run(OrganizationFolder.java:849)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
[Fri Feb 10 03:44:40 UTC 2017] Finished organization scan. Scan took 9 min 21 sec
Evaluating orphaned items in My Organization.
Will not remove myproject-b as MyOrganization Inc. » myproject-b » master #3 is still in progress
Will remove myproject-c as it is #1 in the list
Will remove myproject-d as it is #2 in the list
Will remove myproject-e as it is #3 in the list
Will remove myproject-f as it is #4 in the list
Will remove myproject-g as it is #5 in the list
Finished: SUCCESS
Reposting our success with a quick patch of 1.1 for John and anyone in a similar situation.
We greatly reduced our api usage and have had no problems with our fork for several months.
jhovell how many repos do you have in an org? are they all private repos?
michaelneale - 120 repos, all but ~5 are private, most of which do not use this plugin (yet) but many of which sadly have a large number of branches (I found 1 repo with 250 branches, an artifact of some other tool). We're experimenting with branch name filters for our critical projects, but since the repo filter needs to be a regex and we have a large number of repos, it seems challenging to maintain a regex that includes the right repos.
An aside - it looks like each repo scanned is requiring ~10 API calls per branch per repo? I have been curling https://api.github.com/rate_limit as the repo refresh process is occurring & can watch it steadily drop.... scanning 2 branches on all repos consumed about 2000 API calls of the 5000-per-hour limit GitHub enforces for authenticated users. It still means we need to make sure no one triggers a rescan more than once per hour at most.
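For anyone who wants to watch the limit from code rather than curl, the org.kohsuke.github library that the plugin uses exposes the same numbers; a small sketch, assuming credentials are configured in ~/.github and using the public field names from the github-api versions current at the time:

import org.kohsuke.github.GHRateLimit;
import org.kohsuke.github.GitHub;

public class RateLimitCheck {
    public static void main(String[] args) throws Exception {
        // Connects using ~/.github credentials; anonymous access only gets 60 requests/hour.
        GitHub github = GitHub.connect();
        GHRateLimit rate = github.getRateLimit();
        System.out.println("Remaining " + rate.remaining + " of " + rate.limit
                + ", resets at " + rate.reset);
    }
}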
spockninja - thank you, I did see your comment earlier. I am not a Jenkins plugin developer & wasn't quite sure how to build & install a patch to a plugin. If there is some documentation on how to do this I'd definitely give it a try, though I'm not sure what drawbacks exist (other than the obvious need to keep your fork maintained as the project moves forward).
Thanks for the quick response! The power of this plugin is awesome, it's changed the way we work, just challenging to keep running for larger orgs.
Thanks jhovell - wow that doesn't sound too huge. I know the ci.jenkins.io project is using this itself (or trying to) with 1000's of repos (it understandably has problems).
This doesn't sound like it should be happening - 10 API calls per branch per repo, stephenconnolly? Is that expected?
FYI it will be hard to downgrade to a 1.1 version (if you do, make sure you backup things etc) if you want to try it. But hopefully we can sort out the SCM 2 flavour of things, as your use case is spot on what this should work for, with no issues at all. Glad this functionality is of value to you, it does make things a whole lot easier when it works.
Having the same problem with the rate limit being exceeded, although I've only got about 150 repos in the organization, but a really long history of branches. I was reading the sources of the plugin "Github Branch Source" and found a call to retrieve collaborators when querying a particular repository, which ends up as an API request to the Collaborators endpoint. It seems odd to me because the local collaboratorNames variable is not used any further in the body of the method. Removing it would save 150 requests for me, and even more for people with thousands of repositories.
2 cents.
Noticed the collaborators call (it's in the organization sync log as well) and thought it was strange as I wasn't sure why it would be needed. Happy to provide full logs or anything else if it's of help debugging.
Figured out that collaboratorNames is used to identify "trusted" users of the repository. Although if the repository is private it does not make sense to assume any pull request as not trusted, because only selected people have access to the repository anyway.
Although if the repository is private it does not make sense to assume any pull request as not trusted, because only selected people have access to the repository anyway.
Oh if only it were that simple. Some organizations will allow everyone in the org read access to the OPS repos but only the OPS team has write access. It is the write access to a repo that should gate whether the PR is trusted or not.
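To make that concrete, the sort of lookup being discussed looks roughly like the sketch below against the github-api library (illustrative only, not the plugin's actual isTrusted code); the getCollaboratorNames() call is the one that costs the extra API request per repository:

import java.io.IOException;
import java.util.Set;
import org.kohsuke.github.GHPullRequest;
import org.kohsuke.github.GHRepository;

public class TrustCheck {
    // Treat a PR as trusted only if its author appears in the repository's
    // collaborator list (the lookup served by the Collaborators endpoint),
    // rather than merely having read access to the repository.
    static boolean isTrusted(GHRepository repo, GHPullRequest pr) throws IOException {
        Set<String> collaborators = repo.getCollaboratorNames(); // one extra API call per repository
        return collaborators.contains(pr.getUser().getLogin());
    }
}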
In any case, there is a solution to the rate limit problem, pro-active throttling. I have a PoC hack implementation in a Git Stash but there are some implications that need to be thought through carefully. For example: in this change to branch-api I have had to make the event handlers use synchronization in order to ensure that there are no file handle leaks. I will need to find a better fix (i.e. write more complex code) in order to work with the pro-active throttling as otherwise a rate limit throttle will effectively block all events from all sources.
As I said earlier, there is a much more critical bug in the BitBucket plugin that is currently #1 on my todo list. This issue is #2
Ok. The data loss issue for BitBucket (JENKINS-36029) and the general data loss issue (JENKINS-42000) have now both been fixed. This issue is now number 1 on my ToDo list.
Are there any volunteers for taking experimental builds when I have them ready?
I'll have to upgrade to 2.x first, but will be happy to give it a try.
Meanwhile I've got a very hacky workaround for this issue with a pipeline script that will check the GitHub organization and create the multibranch projects for all the repositories without doing any checks. See this gist for details.
Be aware that it uses Jenkins Job DSL plugin, so you will have to install it before running.
OK... Very experimental trial... use at your own risk. BACKUP EVERYTHING IMPORTANT TO YOU BEFORE INSTALLING
You will need:
1. Updated snapshot of cloudbees-folder
2. Updated snapshot of branch-api
3. Updated snapshot of github-api
4. Updated snapshot of github-branch-source
You will get:
- Log messages like this:
Started by user anonymous
Consulting GitHub Organization
Connecting to https://api.github.com with no credentials, anonymous access
API Rate Limit 60 with 0 remaining
API Rate Limit 60 with 0 remaining, sleeping until Mon Feb 20 18:15:22 GMT 2017
- I have not put the rate limit guards on every code path, so you may still end up tripping the rate limits, but the primary important paths all have the rate limit guards, so you should have your scan complete eventually.
- If another plugin is also using the same credentials, it will not be paying any attention to the branch source's needs and hence may cause the rate limit to trip anyway... but most people should not hit the limits (though an anonymous scan of even a reasonably sized organization will take hours; 60 requests per hour is not much!)
cloudbees-folder.hpi
branch-api.hpi
github-api.hpi
github-branch-source.hpi
Also note, you should see throttling kick in progressively once the rate limit is 75% consumed... it will progressively sleep for longer and longer, starting with 30-second sleeps and progressing through 1- and 5-minute sleeps, until the rate limit is 99% consumed, at which point it will sleep until the reset.
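A simplified sketch of that back-off idea (the intermediate thresholds here are invented for illustration; this is not the actual implementation):

import java.util.concurrent.TimeUnit;

public class RateLimitThrottle {
    // Sleep before the next API call based on how much of the rate limit is used:
    // below 75% no delay, then increasingly long pauses, and at 99%+ wait for the reset.
    static void throttle(int limit, int remaining, long resetEpochMillis) throws InterruptedException {
        double used = 1.0 - ((double) remaining / limit);
        if (used < 0.75) {
            return;                          // plenty of budget left
        } else if (used < 0.90) {            // threshold assumed for illustration
            TimeUnit.SECONDS.sleep(30);
        } else if (used < 0.95) {            // threshold assumed for illustration
            TimeUnit.MINUTES.sleep(1);
        } else if (used < 0.99) {
            TimeUnit.MINUTES.sleep(5);
        } else {
            // Nearly exhausted: wait until the rate limit window resets
            long wait = Math.max(0, resetEpochMillis - System.currentTimeMillis());
            Thread.sleep(wait);
        }
    }
}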
I am working on a different algorithm, but my aim is to prove out the approach in general.
if you need to roll back, you only need to roll back the GitHub Branch Source plugin. The other plugins should be fine to run.
cc paulchubatyy kiora morgan_goose there is some stuff to test above ^
All the images for this post are hosted on imgur so that this issue does not get trashed by attachments. Obviously I cannot embed images from third parties into this post, so please follow the links; they should be self-explanatory.
Testing on:
OS
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
================
Jenkins package itself
$ apt-cache show jenkins
Package: jenkins
Architecture: all
Version: 2.47
Priority: extra
Section: devel
Maintainer: Kohsuke Kawaguchi <kk@kohsuke.org>
Installed-Size: 67107
Depends: daemon, adduser, procps, psmisc, net-tools, default-jre-headless (>= 2:1.7) | java7-runtime-headless
Conflicts: hudson
Replaces: hudson
Filename: binary/jenkins_2.47_all.deb
Size: 68329532
MD5sum: ce0ba54cf9b384af318e61330cb00b2d
SHA1: 58930ad52cf6c49044a02663cb9f2cd066a99ff3
SHA256: ddfb5eafec356eebdce011f00ca2ba37b65f5168a3825eaff8b8668e8f3495c2
SHA512: 5aa113ca649efbfa7440f8aac446d54478ed2bbe236106b80ba3361c2a4b9a850a7bfc388022f49c0c2b2da06337d682061da56a8e2c0689f066866056d8af23
Homepage: http://jenkins.io/
$ sudo apt-get install jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
jenkins is already the newest version.
Plugins installed: Github API and Github Branch Source Plugins, Folders Plugin, Branch API Plugin
Initial organization scan
Organization settings pretty straightforward, nothing special. Trigger scan every 5 minutes. Do not build anything automatically.
Result: exception raised and not handled
Scan failed.
paulchubatyy thanks for that, my anon scan completed just fine (after 4 hours) so I guess I got lucky with where the org repos fell.
The anonymous access case is probably going to be most problematic here, as 60 is really not much and you need to wait an hour between tests if they fail (or hop VPNs).
I expect that the -SNAPSHOT should work much better when given a real rate limit (i.e. 5000/hr)
github-branch-source.hpi (new improved -SNAPSHOT)
$ sha1sum github-branch-source.hpi
d7b09b1ac67dab05079fb53b92808b1063061c9d  github-branch-source.hpi
paulchubatyy this new snapshot should fix the issues for you (can still have the rate limit pulled out from under it by other plugins using the same credentials)
If you are using a valid API token, I am not sure what can be done about this other than complaining to GitHub that your rate limit is too low.