• Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Jenkins 2.32.3
      Bitbucket-branch-source-plugin 2.1.0 (working version 1.9)

      We have a Bitbucket team with about 300 repositories which is scanned for Jenkinsfiles.

      We updated the Bitbucket-branch-source-plugin to 2.1.0 yesterday (version 1.9 was working) and now we hit the rate limit every time.

      This means that repositories that come last in the arbitrarily sorted list are never added. We think that this plugin should know about the rate limit and throttle itself. It should always handle newly updated repositories first.

      The best solution would be if Atlassian created a webhook in the team resource for repository creation, but we need a workaround in the meantime.

      [Fri Mar 03 05:29:24 UTC 2017] Finished organization scan. Scan took 8 min 23 sec
      FATAL: Failed to recompute children of {Removed Job name}
      com.cloudbees.jenkins.plugins.bitbucket.api.BitbucketRequestException: HTTP request error. Status: 429: Unknown Status Code.
      Rate limit for this resource has been exceeded
      	at com.cloudbees.jenkins.plugins.bitbucket.client.BitbucketCloudApiClient.getRequest(BitbucketCloudApiClient.java:568)
      	at com.cloudbees.jenkins.plugins.bitbucket.client.BitbucketCloudApiClient.getRepository(BitbucketCloudApiClient.java:232)
      	at com.cloudbees.jenkins.plugins.bitbucket.client.BitbucketCloudApiClient.isPrivate(BitbucketCloudApiClient.java:405)
      	at com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource.retrievePullRequests(BitbucketSCMSource.java:334)
      	at com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource.retrieve(BitbucketSCMSource.java:325)
      	at jenkins.scm.api.SCMSource._retrieve(SCMSource.java:300)
      	at jenkins.scm.api.SCMSource.fetch(SCMSource.java:254)
      	at jenkins.branch.MultiBranchProjectFactory$BySCMSourceCriteria.recognizes(MultiBranchProjectFactory.java:263)
      	at jenkins.branch.OrganizationFolder$SCMSourceObserverImpl$1.recognizes(OrganizationFolder.java:1266)
      	at jenkins.branch.OrganizationFolder$SCMSourceObserverImpl$1.complete(OrganizationFolder.java:1281)
      	at com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMNavigator.add(BitbucketSCMNavigator.java:212)
      	at com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMNavigator.visitSources(BitbucketSCMNavigator.java:187)
      	at jenkins.branch.OrganizationFolder.computeChildren(OrganizationFolder.java:399)
      	at com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.updateChildren(ComputedFolder.java:219)
      	at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:154)
      	at jenkins.branch.OrganizationFolder$OrganizationScan.run(OrganizationFolder.java:850)
      	at hudson.model.ResourceController.execute(ResourceController.java:98)
      	at hudson.model.Executor.run(Executor.java:404)
      Finished: FAILURE
      

          [JENKINS-42458] Rate limit reached after plugin update

          Melvyn de Kort added a comment - edited

          We have a partial fix in the making for this issue.

          Our change prevents repositories from being scanned when a job already exists for the repository.

          There are a few gaps though:

          1. When a repository doesn't have a working webhook, it will always be skipped during scanning, so no automatic builds will ever happen. This is not a problem when there is no pre-existing job for the repository or when webhooks are explicitly turned off in Jenkins.
          2. Jobs that already exist while scanning are considered stale/orphan and will be deleted. Our workaround is to configure the plugin to never delete orphan jobs (Days to keep old items = 999). Consequently, jobs for deleted repositories are never deleted.

          I suspect the plugin maintainer wouldn't accept this fix because of these gaps, but perhaps someone can use this as input for their own fix.

          Source code: https://github.com/jenkinsci/bitbucket-branch-source-plugin/compare/master...r-kok:prevent-rate-limiting


          phil swenson added a comment -

          Our solution is to switch to GitHub.


          Georgi Hristov added a comment -

          I am having the same problem. Can we expect a fix, or do we need to stop using automatic project discovery and creation and add projects manually?


          Georgi Hristov added a comment - edited

          Another workaround is to use the "Repository name pattern" option. You can set a regex like (repo.|fullRepoName|anotherFullRepoName), where repo. stands for repositories with a common prefix (it will pick up repositories like repo.core, repo.abstractions and so on) and the other alternatives are the exact names of the other repositories you are interested in.

          Don't forget to set "Days to keep old items" and "Max # of old items to keep" to something like 999, as lordmatanza suggested above, so you won't lose any of your projects if you somehow mess up the regex.

          The "Repository name pattern" takes a regex, so you can come up with something more sophisticated than the one I am using, but it does the job and I don't need to switch to GitHub.

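          The pattern-based workaround above can be sketched as follows. This is an illustration, not the plugin's code: it assumes the plugin applies the regex with full-match semantics, and it spells the prefix as repo\..* (a literal dot followed by anything) because a bare repo. would only match five-character names under a full match. The repository names are the examples from the comment.

```java
import java.util.List;
import java.util.regex.Pattern;

public class RepoNamePatternDemo {
    // Regex in the spirit of the comment's (repo.|fullRepoName|anotherFullRepoName),
    // with repo\..* making the "common prefix" intent explicit under full-match
    // semantics (assumed behavior, not taken from the plugin source).
    static final Pattern NAME_PATTERN =
            Pattern.compile("(repo\\..*|fullRepoName|anotherFullRepoName)");

    /** True when the whole repository name matches the pattern. */
    static boolean isScanned(String repoName) {
        return NAME_PATTERN.matcher(repoName).matches();
    }

    public static void main(String[] args) {
        for (String name : List.of("repo.core", "repo.abstractions",
                "fullRepoName", "some-other-repo")) {
            System.out.println(name + " -> " + isScanned(name));
        }
    }
}
```

          Only the repositories matching the pattern are scanned, so the scan stays well under the rate limit.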

          James Dumay added a comment -

          lordmatanza can you open a PR for your changes? We can get someone to take a look.


          Melvyn de Kort added a comment -

          I have created the pull request and created a link from this issue.

          GitHub complains that the pull request cannot be merged, but that is because my commit was based on an earlier commit from master, not the current head.

          If you would consider merging the change, I would be happy to rebase my pull request onto the current head.


          Stephen Connolly added a comment -

          lordmatanza I disagree with the hacky solution you proposed in your PR.

          I do not currently have time to implement a rate limiting solution, but if you want to take a stab at it, you are more than welcome to try.

          It should probably be easy to localize within the Bitbucket Cloud client classes. I'm thinking something like:

          1. Add fields to BitbucketCloudApiClient that record the rate limit header from the last response.
          2. Modify BitbucketCloudApiClient.getRequest(...) and similar methods to both populate the last-response rate limit header fields and sleep when the rate limit is over-target. (If you want a solid, if probably over-engineered, sleep algorithm, see Connector.java#L454-L540.)
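          A minimal sketch of that suggestion: remember the rate-limit headers from the last response and sleep before the next request when the budget is exhausted. The header names (X-RateLimit-Remaining, X-RateLimit-Reset) are hypothetical, since, as noted later in the thread, Bitbucket Cloud does not actually expose its limits; this only illustrates the shape of the change, not the plugin's real getRequest(...).

```java
import java.util.Map;

/**
 * Sketch of a rate-limit-aware client helper: record the limit headers
 * from the last response, and sleep until the window resets when the
 * remaining budget is used up. Header names are hypothetical.
 */
public class RateLimitTracker {
    private volatile long remaining = Long.MAX_VALUE;
    private volatile long resetAtMillis = 0L;

    /** Call after each response with its headers. */
    public void record(Map<String, String> headers) {
        String rem = headers.get("X-RateLimit-Remaining"); // hypothetical header
        String reset = headers.get("X-RateLimit-Reset");   // hypothetical, epoch seconds
        if (rem != null) {
            remaining = Long.parseLong(rem);
        }
        if (reset != null) {
            resetAtMillis = Long.parseLong(reset) * 1000L;
        }
    }

    /** Call before each request; blocks until the window resets when exhausted. */
    public void throttle() throws InterruptedException {
        if (remaining <= 0) {
            long waitMillis = resetAtMillis - System.currentTimeMillis();
            if (waitMillis > 0) {
                Thread.sleep(waitMillis);
            }
            remaining = Long.MAX_VALUE; // assume a fresh window after the reset time
        }
    }

    public long remainingRequests() {
        return remaining;
    }
}
```

          In the plugin, record(...) would be called from getRequest(...) after each response and throttle() before each request, keeping the change localized to the client class as suggested.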

          Melvyn de Kort added a comment - edited

          stephenconnolly I agree with you on the hacky solution. I immediately stated that this fix would probably not be accepted.

          I would like to implement the solution you've suggested, but my time is currently limited as well. And since our current hacky workaround works, it's difficult for me to spend time on this issue during work hours, even though our fix is far from the desired solution.

          Maybe in the upcoming weeks I might get some time to implement your suggestion, but I can't make any promises.

          Let's update this issue when one of us (or somebody else) starts to work on this.


          Melvyn de Kort added a comment -

          It seems that Bitbucket Cloud has no way to query the current state of your rate limits.

          The page that explains the current limits gives very minimal information.

          I've found a developer support question where an Atlassian team member explains that there is currently no way to predict when rate limiting will occur.

          Therefore I've created a simple fix in BitbucketCloudApiClient.getRequest(...) that sleeps and retries when status code 429 is returned.

          I've created a new pull request and linked it to this issue.

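          The idea of that fix, sleeping and retrying when the server answers 429, can be sketched as below. The IntSupplier stands in for the actual HTTP call, and the attempt and sleep parameters are illustrative, not the values used in the merged pull request.

```java
import java.io.IOException;
import java.util.function.IntSupplier;

public class RetryOn429 {
    /**
     * Runs the request, sleeping and retrying while it returns HTTP 429.
     * The request supplier stands in for the real HTTP call and yields
     * the response status code.
     */
    static int executeWithRetry(IntSupplier request, int maxAttempts, long sleepMillis)
            throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            int status = request.getAsInt();
            if (status != 429) {
                return status; // success, or a non-rate-limit error for the caller
            }
            Thread.sleep(sleepMillis); // back off before retrying
        }
        throw new IOException("Rate limit still exceeded after " + maxAttempts + " attempts");
    }
}
```

          Unlike the header-based throttling sketched earlier, this approach needs no information from Bitbucket beyond the 429 status itself, which is why it works despite the limits being unqueryable.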

          Melvyn de Kort added a comment -

          The PR was accepted; this issue can be resolved/closed.


            Assignee: amuniz Antonio Muñiz
            Reporter: nossnevs Mikael Svensson
            Votes: 7
            Watchers: 12
