- Bug
- Resolution: Fixed
- Blocker
- None
- Jenkins ver. 2.138.2
Sometimes, when we create a PR on GitHub, it is ignored by Jenkins. I've tried re-scanning the organization and see in the logs:
Getting remote pull requests...
  Checking pull request #128
    ‘Jenkinsfile’ not found
  Does not meet criteria
But the Jenkinsfile is there! I see similar bugs in your Jira, so here is some additional information:
- Repository is added to Jenkins via GitHub Organization Folder
- We don't use forks
- The committer is a member of our organization and has full rights on this repository
- The problem PRs are also very simple and don't touch the Jenkinsfile at all
- The source PR branch has a Jenkinsfile
- The branch we want to merge was created from the source branch and also has a Jenkinsfile
- The Jenkinsfile was not changed between the merge commits
- If I merge the branches manually, the result has a Jenkinsfile
- If we do the merge in the GitHub UI, the result has a Jenkinsfile (and is built)
- If we recreate the same pull request several times, it will be built
And more logs:
Examining our-organization/our-repo:
  Checking branches...
  Getting remote branches...
  Checking branch master
    ‘Jenkinsfile’ found
    Met criteria
  No changes detected: master (still at 923197f48be5cd8296b8ca95bd72a4a830a474f4)
  Checking branch develop
    ‘Jenkinsfile’ found
    Met criteria
  No changes detected: develop (still at 43c2cce36623a4af90b28d886dfb28ea8d813ab8)
  Checking branch feature/198-video-verification
    ‘Jenkinsfile’ found
    Met criteria
  No changes detected: feature/198-video-verification (still at 9126f0c6958db3c712078ed0d2587e96004d27c6)
  3 branches were processed
  Checking pull-requests...
  Getting remote pull requests...
  Checking pull request #128
    ‘Jenkinsfile’ not found
    Does not meet criteria
PR #128 is between develop and feature/198-video-verification, and, as you can see, both branches have a Jenkinsfile and were built successfully.
is duplicated by:
- JENKINS-54403 Job fail when GitHub tag was recreated (Closed)

is related to:
- JENKINS-60353 Non-mergable Pull-request Contents Get into GH Cache with False-Positive 404 (Closed)
- JENKINS-57206 Support draft pull requests from GitHub (Closed)
- JENKINS-57411 Upgrade github-branch-source-plugin to use okhttp3 (Closed)
[JENKINS-54126] Jenkinsfile not found in PR on GitHub -- Does not meet criteria
awiddersheim I have tried setting the JVM property, but that doesn't create the new job config for each branch source. Hence, the Jenkinsfile changes are ignored even though a scan of the repository picks up the change in the commit SHA. Is there any alternative?
The branch source plugin is creating a stale job directory alongside the current builds folder job source. It appears the fingerprints help to keep duplicates on Jenkins.
I can confirm that -Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0 doesn't help. Only a cron job that removes the cache directory every minute did the trick.
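For reference, a minimal sketch of such a cron entry (path assumes JENKINS_HOME=/var/lib/jenkins and that the entry goes into the crontab of the user Jenkins runs as; adjust both to your install):

# Hypothetical crontab line: wipe the GitHubSCMProbe cache every minute
* * * * * rm -rf /var/lib/jenkins/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache/*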
If you turn off the cache and clear the cache dir, does the problem come back?
Removing Stephen as the owner. vivek, FYI. I believe you are already aware.
stephenconnolly rsandell As reported above, disabling the cache (using -Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0) doesn't help; removing the caching directory does. Perhaps, if caching is disabled, we should clean up the caches as well, in a fail-safe manner?
I am guessing it might be a case of the cache not getting correctly invalidated, resulting in this behavior.
As a workaround, is it safe to nuke $jenkins_home/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache while Jenkins is running? Would this potentially cause us to miss any events?
npwolf On a running Jenkins, it might. You have to disable the cache anyway, which requires setting this JVM property: -Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0.
We have hit this issue several times recently, and we'd really like to get something in place to mitigate it ASAP, even if it's a temporary measure. I would appreciate it very much if anyone here could clarify any of the following points:
- When will rsandell's fix be available? (It looks like the newest currently-available version of the branch source plugin, 2.4.3-beta-1, was built on January 28th, while that PR was merged in late February)
- soar seems happy with a cron job that blows away the cache every minute, while vivek suggests that nuking the cache while Jenkins is running could cause issues. What kinds of issues might be caused by disabling the cache in a running system?
- Several people seem to have concluded that setting the cacheSize=0 JVM property does not help, but in the most recent comment in this thread, Vivek says that, "you have to disable the cache anyways, that needs setting this jvm property". Has anyone actually been able to mitigate this issue by setting that property?
Thanks!
Yes, adding the JVM argument to disable the cache did the job for me.
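In case it helps anyone unsure where that flag goes, a sketch for a Debian/Ubuntu package install (the file path and variable name are assumptions for that packaging; systemd-based or other installs pass extra JVM flags differently):

# In /etc/default/jenkins (Debian/Ubuntu package), append the property to JAVA_ARGS:
JAVA_ARGS="$JAVA_ARGS -Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0"
# then restart the service, e.g.
sudo systemctl restart jenkins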
Here are the steps we are using to reproduce the issue (a rough git sketch follows the list):
- create a branch
- commit something
- create a PR
- merge the PR with master
- delete the branch in the GitHub Enterprise UI.
- create a branch with the same name from master (the branch name is the same as the previous one, but that's OK since we already deleted it from the GitHub UI).
- now, if we create another PR using this branch, even though it shows up properly in the GitHub UI, a scan in Jenkins reports no Jenkinsfile found, so the new PR cannot be built.
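For anyone trying to reproduce this locally, the list above maps roughly onto the following git commands (a sketch; the branch and file names are made up, and the PR create/merge/delete steps happen in the GitHub Enterprise UI):

# Rough sketch of the steps above; 'my-feature' and 'somefile' are placeholder names
git checkout -b my-feature origin/master
echo "change" >> somefile
git add somefile && git commit -m "some change"
git push -u origin my-feature
# ...open a PR for my-feature in the GitHub UI, merge it, then delete the branch there...
git checkout master && git pull
git branch -D my-feature
git checkout -b my-feature origin/master   # recreate a branch with the same name
echo "another change" >> somefile
git add somefile && git commit -m "another change"
git push -u origin my-feature
# ...open a second PR for my-feature: the Jenkins scan now reports 'Jenkinsfile' not found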
Suffering from the same issue here.
Jenkins core 2.164.1
GitHub branch source 2.5
We use a pipeline script to clear the cache whenever there is an issue.
node('master') {
    stage('Clear cache') {
        // Single-quoted so the shell (not Groovy) expands $JENKINS_HOME on the controller node
        sh 'rm -r "$JENKINS_HOME/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache"'
    }
}
It's not ideal, but hopefully someone else finds it useful.
This breaks the plugin for anyone doing PRs. Really shouldn't be any further releases until this is fixed.
We have rolled back for now and are opting out of upgrades.
dbsanfte
What did you roll back to that "fixed" this issue for you? Could you provide some additional details about your configuration and what output you're seeing?
dbsanfte amirbarkal
There is a new version, v2.5.1, that fixes PRs from forks.
synalogik, amirbarkal, dbsanfte, soar
If anyone is interested in trying it, I have an experimental version of the github-branch-source-plugin using okhttp3.
Removed
If you have a test/staging Jenkins on which you are comfortable trying this out, it would be helpful to know whether this issue would be fixed by this upgrade.
The code can be seen at https://github.com/jenkinsci/github-branch-source-plugin/pull/223
bitwiseman maybe we can run this at least temporarily on ci.jenkins.io? I know rtyler has a strict “no experimental code” rule for this server, but the current situation is badly broken to begin with.
Started by timer
Started by timer
java.lang.NoSuchMethodError: okio.BufferedSource.readUtf8LineStrict(J)Ljava/lang/String;
at okhttp3.internal.http1.Http1Codec.readHeaderLine(Http1Codec.java:215)
at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.huc.OkHttpURLConnection$NetworkInterceptor.intercept(OkHttpURLConnection.java:666)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.huc.OkHttpURLConnection$UnexpectedException$1.intercept(OkHttpURLConnection.java:600)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:250)
at okhttp3.RealCall.execute(RealCall.java:93)
at okhttp3.internal.huc.OkHttpURLConnection.getResponse(OkHttpURLConnection.java:472)
at okhttp3.internal.huc.OkHttpURLConnection.getResponseCode(OkHttpURLConnection.java:509)
at okhttp3.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at okhttp3.internal.huc.OkHttpsURLConnection.getResponseCode(OkHttpsURLConnection.java:26)
at org.kohsuke.github.Requester.parse(Requester.java:615)
at org.kohsuke.github.Requester.parse(Requester.java:607)
at org.kohsuke.github.Requester._to(Requester.java:285)
at org.kohsuke.github.Requester.to(Requester.java:247)
at org.kohsuke.github.GitHub.checkApiUrlValidity(GitHub.java:744)
at org.jenkinsci.plugins.github_branch_source.Connector.checkApiUrlValidity(Connector.java:329)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.checkApiUrlValidity(GitHubSCMSource.java:1377)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.retrieve(GitHubSCMSource.java:1430)
at jenkins.scm.api.SCMSource.fetch(SCMSource.java:582)
at org.jenkinsci.plugins.workflow.multibranch.SCMBinder.create(SCMBinder.java:98)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I don't understand your question. We use the organization plugin to discover repositories and branches. This beta plugin broke all of our builds immediately.
I observed that Jenkins will ignore a PR if that PR has conflicting files.
jsoref, synalogik, amirbarkal, dbsanfte, soar
Here's the updated plugin with okio (thanks Josh!).
Removed
armantur
Could you open a new issue for that? It is probably not related to this issue.
jsoref
Updated again. I manually tested it this time on my local Jenkins. Builds run.
https://repo.jenkins-ci.org/incrementals/org/jenkins-ci/plugins/github-branch-source/2.5.3-rc841.52a0b90bff37/
bitwiseman: that one works!
I'll leave it running. And if anything explodes, I'll let you know. Otherwise, I look forward to finally upgrading to a working plugin.
Thanks
jsoref
Okay, so to be clear, this is still an experiment/exploration. The point is to see whether switching to okhttp3 will address the FileNotFound issues. We're going to need more reports before we can release it.
synalogik, amirbarkal, dbsanfte, soar (and anyone else)
Please give this version of the plugin a try: https://repo.jenkins-ci.org/incrementals/org/jenkins-ci/plugins/github-branch-source/2.5.3-rc841.52a0b90bff37/
bitwiseman
Not really sure how to reproduce this - can you post instructions?
We're using a Multibranch Pipeline repository with GitHub branch source configured to build PRs, branches, and tags.
So, we aren't currently hitting the ‘Jenkinsfile’ not found problem, but we are seeing that the periodic scans don't seem to be running periodically, and a branch whose name has been recycled (but for which we'd expect GitHub to send a notification) isn't being noticed automatically.
We are able to use "Scan Repository Now" to pick up those branches, which feels like progress.
jsoref
Let's track any issues you see with the okhttp3 version as a separate issue:
https://issues.jenkins-ci.org/browse/JENKINS-57411
We sort of hijacked this issue as something that might be fixed by the okhttp3 upgrade. If you see this issue again, it should go here; if you see other issues, those should go there. Sound good?
soar, scotje, jcollard, jsoref, dbsanfte, aedwards, synalogik, npwolf
We're trying to test whether this underlying issue could be fixed or improved by moving to okhttp3. If you are willing to help, please try the patched version of the plugin shown in the description of https://issues.jenkins-ci.org/browse/JENKINS-57411 . Thanks.
I tried 2.5.4-rc849.b58a1bae7fce, it did not fix the issue for me.
I had been running builds on the branch (without a PR up) with no problem. Then as soon as I made a PR for the branch, Jenkins lost track of the branch and PR because it could no longer see the Jenkinsfile on scan. I had made the PR after merging master, and resolving conflicts in the Jenkinsfile.
Deleting the cache (as above) also fixed this for me.
Workaround that worked for me:
Groovy config - for a container
org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0
And
GitHubServerConfig server = new GitHubServerConfig("my_github_credential_API")
server.setApiUrl(githubAPIurl)
server.setClientCacheSize(0) // <-- See https://issues.jenkins-ci.org/browse/JENKINS-54126. Had to disable this one too to get this to stop happening
Once both of those are configured, delete the directory:
$jenkins_home/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache
Now that directory is not getting re-populated. Not sure why the second code block was required, but passing it along for what it's worth, to the next person.
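In case it helps, one way to pass that cacheSize system property when Jenkins runs in a container is via JAVA_OPTS, which the official jenkins/jenkins image forwards to the JVM (a sketch only; the volume, ports, and tag are examples, not a recommendation):

# Sketch: disable the GitHub branch source cache for a containerised Jenkins
docker run -d --name jenkins \
  -e JAVA_OPTS="-Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0" \
  -v jenkins_home:/var/jenkins_home \
  -p 8080:8080 -p 50000:50000 \
  jenkins/jenkins:lts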
Hello All -
I had this same issue where the Jenkinsfile was not being discovered during the scan. After deleting the specified cache directory and re-scanning, it worked perfectly.
thomasliddledba
It is good to know the workaround still works. Thanks!
This issue occurred on one of our branches today. On logging into the Jenkins server, I noticed the entire branch was missing under the jobs folder. We use a Multibranch Pipeline project and 2.164.2; this is so annoying, as we lost the entire history of the PR(s) that went into this branch. We map build numbers to PRs to do feature testing, and now we don't have those details.
We encountered this problem today, we have:
Jenkins v2.176.2
Branch API Plugin v2.5.4
GitHub Branch Source v2.5.5
Multijob plugin v 1.32
Pipeline: Multibranch 2.21
Deleting the files under $JENKINS_HOME/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache resolved the problem. This isn't a viable long-term solution, though.
Steps to reproduce (a rough git sketch follows further below):
1. Create a branch testbranch that is one commit behind the tip of master (or some branch that is to be merged into)
2. Make a commit to testbranch that will cause a conflict with the latest on master.
3. In GitHub, create a pull-request for testbranch to be merged into master. (It will warn you that it can't be automatically merged, but "don't worry, you can still create the pull-request".) Jenkins Multibranch Pipeline will refuse to create a build for both the branch and the PR.
4. Rebase testbranch off the latest on master, resolve the conflicts, and git push origin testbranch --force.
5. Jenkins Multibranch Pipeline will catch that the branch changed and build the branch; however, it still doesn't detect the pull-request and refuses to create a build for it.
Closing and re-opening the pull-request doesn't help.
Closing the pull-request and opening a new one (for the same, conflict-free branch) does work (but is obviously not ideal).
rm -rf $JENKINS_HOME/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache/* does work.
I'd also like to state that there indeed was a Jenkinsfile on testbranch, and that the trigger of this bug is when there's a merge conflict at the time the pull-request is created.
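For concreteness, steps 1, 2, and 4 above as rough git commands (testbranch, master, and shared-file are placeholder names; the pull-request itself is created in the GitHub UI at step 3):

# Sketch of the conflicting-branch scenario; assumes master's tip commit also touches shared-file
git checkout -b testbranch master~1        # step 1: branch one commit behind master's tip
echo "conflicting change" > shared-file
git add shared-file && git commit -m "change that conflicts with master"   # step 2
git push -u origin testbranch
# step 3: open the PR testbranch -> master in GitHub; it warns it can't be merged automatically
git rebase master                          # step 4: rebase onto master's tip
# ...resolve the conflicts, git add them, then `git rebase --continue`...
git push origin testbranch --force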
Jenkins 2.176.2
GitHub Source Plugin 2.5.6
Pipeline: Multibranch 2.21
We're seeing this all the time on PRs that use the draft PR feature in GitHub. I found a related issue reported for it already:
Seeing this happen still,
Jenkins 2.190.1
Github Branch Source Plugin. 2.5.8
Pipeline: Multibranch: 2.21
This has been happening from time to time for us and now again today.
I did notice in a related/linked issue, https://issues.jenkins-ci.org/browse/JENKINS-57206, that this might be related to the webhook from github setting:
"mergeable_state": "unknown",
And I can confirm that is what was sent in our case. To recover, we had to delete the cache at `$JENKINS_HOME/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache/*`, as it would not recover by re-scanning nor by re-triggering the webhook.
jordanjennings - That is a completely separate issue.
Would you be willing to try the OkHttp3 update in JENKINS-57411? It will not fix the issue while it is happening, but we are hoping it may prevent or reduce the occurrences; we need more people who have actually seen the issue to try the fix.
Hey, bitwiseman
This is happening very rarely for us, but it is annoying when it happens.
I can see about running that OkHttp3 update; it seems to be a patch version behind, though? Are there any drawbacks to installing the OkHttp3 version?
As a side note (it might be related): sometimes when we add a GitHub repo and the credentials provided in Jenkins don't have access, we seem to run into a caching issue as well.
So we add the Jenkins user to the GitHub repo with admin access, but re-scanning the pipeline still fails with an invalid credentials error. Also, the "Validate" function in the job configuration will output `Error: Credentials Ok`.
This can be worked around by changing the case of any letter in the repo url in the Job configuration. (for example change `https://github.com/user/repo` to `https://github.com/User/repo`)
I haven't tried deleting the GitHub Branch Source cache, but I'm guessing that would also solve the issue; I will try that next time instead of the above-mentioned workaround.
+1 for the issue, which has been annoying for so long now, although we most often see it on real branches (masters and releases of our project that really do provide a Jenkinsfile).
Yes, we do use GitHub caching; we advocated for it to appear and hope it stays. With the poor internet uplink we have, with GitHub REST API quotas for uncached requests, and with no possibility of receiving hooks (thus requiring polling), our farm couldn't really work without it.
So I set out digging in the data and found that cached HTTP 404s in *.0 files correlate with very short *.1 files (the compact error message from the GitHub REST API), so I selected those to look deeper:
:; find . -name '*.1' -size 1 | sed -e 's,^./,,' -e 's,.1$,,' | while read F ; do
     egrep 'HTTP.*404' "$F.0" >&2 && echo "=== $F" && head -1 "$F.0" && ls -la "$F"*
   done
For reasons unknown, however, the cached response for some of the URLs is an HTTP 404 even with valid JSON in the (gzipped) `hashstring.1` file:
{"message":"No commit found for the ref refs/heads/4.2.0-FTY","documentation_url":"https://developer.github.com/v3/repos/contents/"}
and the corresponding `hashstring.0` file looks like:
:; cat fbe7227813e6f1a6bbb2f1e5202a84a2.0
https://api.github.com/repos/42ity/libzmq/contents/?ref=refs%2Fheads%2F4.2.0-FTY
GET
1
Authorization: Basic NDJpdHktY2k6NjA5MDk2YTVmNzNhNTc1YzE1OWYxZjI3NDJlZmI1YjhiMTQzZmIzMw==
HTTP/1.1 404 Not Found
31
X-OAuth-Scopes: admin:repo_hook, public_repo, repo:status, repo_deployment
X-Accepted-OAuth-Scopes:
X-GitHub-Media-Type: github.v3; format=json
Content-Encoding: gzip
Transfer-Encoding: chunked
Connection: keep-alive
Content-Type: application/octet-stream
X-Cache: MISS from thunderbolt.localdomain
X-Cache-Lookup: MISS from thunderbolt.localdomain:8080
Via: 1.1 thunderbolt.localdomain (squid/3.4.4)
Server: GitHub.com
Date: Thu, 28 Nov 2019 00:41:22 GMT
Status: 304 Not Modified
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 5000
X-RateLimit-Reset: 1574905280
Cache-Control: private, max-age=60, s-maxage=60
Vary: Accept, Authorization, Cookie, X-GitHub-OTP
ETag: "2513f4bbc2abb8b63adbec8336a82810a4fb5dc5"
Last-Modified: Wed, 05 Dec 2018 10:54:24 GMT
Access-Control-Expose-Headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type
Access-Control-Allow-Origin: *
Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
X-Frame-Options: deny
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
Content-Security-Policy: default-src 'none'
X-GitHub-Request-Id: F066:2FC4:FE494:24E933:5DDF17B1
OkHttp-Sent-Millis: 1574901681913
OkHttp-Received-Millis: 1574901682110
TLS_RSA_WITH_AES_128_GCM_SHA256
2
MIIECDCCAvCgAwIBAgIUEG8XFkmTLxiL4iPSXqLddY7e6AswDQYJKoZIhvcNAQEFBQAwga0xCzAJBgNVBAYTAkNaMRcwFQYDVQQIDA5QcmFndWUgc3VidXJiczEQMA4GA1UEBwwHUm96dG9reTENMAsGA1UECgwERUVJQzERMA8GA1UECwwIQklPUyBMQUIxJDAiBgNVBAMMG3RodW5kZXJib2x0LnJvei5sYWIuZXRuLmNvbTErMCkGCSqGSIb3DQEJARYcRWF0b25JUENPcGVuc291cmNlQEVhdG9uLmNvbTAeFw0xOTA3MDgwMDAwMDBaFw0yMDA3MTYxMjAwMDBaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4gRnJhbmNpc2NvMRUwEwYDVQQKEwxHaXRIdWIsIEluYy4xFTATBgNVBAMMDCouZ2l0aHViLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKIFH+JTppW1pvbrqnLU1SCYOsFsI6vdoL66M/497v413h1TOEwGWEo1wvZq3YhD65VSlxrsEj7xGd+ZUy2/mzRh2XmGRolJUWd/XKCQ+lJukRLX3BYhRBXfGK9Njv/afR1OIs96A4dTZA7PpPwC5Gvk34iTcJe4gilud//3UqD55A0jk+uEwQqosAImeGQg4Ayqo3K5rR+NhF8NnR7kXT1Cijk6jySbgX5Lhu8FPu7LdiPntxjuvFNJNaRy+6t4PxHJ1iRRlDdsVHyZMcZGb8klafrKsr7kLBWSMKiVaXTdlNc26bUOctH+LySlZB6Q7LgSec3MBqXZBFk0AzfwxPcCAwEAAaNkMGIwIwYDVR0RBBwwGoIMKi5naXRodWIuY29tggpnaXRodWIuY29tMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADANBgkqhkiG9w0BAQUFAAOCAQEAocIF+SVNlLFzWv0A/OUu4TG+aRBdzplMrF6Gy8JxwBSp22SB1PD2H71R5bi4U7UA3vgnpLbyg283XhZndNern1rIf49XXTqFbPC1xcZi85NcYc6xE18pnO0GQRaVgple2MOZXrn32FPgV2Zn/5XxGlQU1eL8leLc8tvMZkokmuBWRkuvCkx7xM5YMSAo4lRsL6zqzio/RLTOqWP1d6qSsGsf3Zc4HJ5RUTeA2QnyO1TRVvO+8bo5rQUHBOVmYhc006zs35LsjaUhG/6R1POZW2OS55U8ArQgLE/dZZV9mNJsTdd2hefv3v0+/whB+Y3stiO7zDMVFIOoHEd0+cUfGg==
MIIELzCCAxegAwIBAgIJAOz23xAU+F0TMA0GCSqGSIb3DQEBCwUAMIGtMQswCQYDVQQGEwJDWjEXMBUGA1UECAwOUHJhZ3VlIHN1YnVyYnMxEDAOBgNVBAcMB1JvenRva3kxDTALBgNVBAoMBEVFSUMxETAPBgNVBAsMCEJJT1MgTEFCMSQwIgYDVQQDDBt0aHVuZGVyYm9sdC5yb3oubGFiLmV0bi5jb20xKzApBgkqhkiG9w0BCQEWHEVhdG9uSVBDT3BlbnNvdXJjZUBFYXRvbi5jb20wHhcNMTgwNDAzMTIxNzU2WhcNMjgwMzMxMTIxNzU2WjCBrTELMAkGA1UEBhMCQ1oxFzAVBgNVBAgMDlByYWd1ZSBzdWJ1cmJzMRAwDgYDVQQHDAdSb3p0b2t5MQ0wCwYDVQQKDARFRUlDMREwDwYDVQQLDAhCSU9TIExBQjEkMCIGA1UEAwwbdGh1bmRlcmJvbHQucm96LmxhYi5ldG4uY29tMSswKQYJKoZIhvcNAQkBFhxFYXRvbklQQ09wZW5zb3VyY2VARWF0b24uY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAogUf4lOmlbWm9uuqctTVIJg6wWwjq92gvroz/j3u/jXeHVM4TAZYSjXC9mrdiEPrlVKXGuwSPvEZ35lTLb+bNGHZeYZGiUlRZ39coJD6Um6REtfcFiFEFd8Yr02O/9p9HU4iz3oDh1NkDs+k/ALka+TfiJNwl7iCKW53//dSoPnkDSOT64TBCqiwAiZ4ZCDgDKqjcrmtH42EXw2dHuRdPUKKOTqPJJuBfkuG7wU+7st2I+e3GO68U0k1pHL7q3g/EcnWJFGUN2xUfJkxxkZvySVp+sqyvuQsFZIwqJVpdN2U1zbptQ5y0f4vJKVkHpDsuBJ5zcwGpdkEWTQDN/DE9wIDAQABo1AwTjAdBgNVHQ4EFgQUAf/vfDxEB9kv3Cfo9fb3ikvyWNswHwYDVR0jBBgwFoAUAf/vfDxEB9kv3Cfo9fb3ikvyWNswDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAlwBAM+b+mxtzgP+Q5AFWzLqj2TwSWXERGNnZQFDVeoZXb2y7UqaAf+Dz8WvTOrn51/fE5jsyqYHUCBXucbFJIuFx4G7vhsspcraIgenTGoP5N4L2UamrEkrqBl1CkYVhP2aykdA9G2Tu/61/rHMNycuLCf/CrZA54QlVQ8M8KtAQo+CEKcGeDBabP4TOtWvPO7ScM9kj5vRTiwy0DaVIL2VaNWLsdqT9tQ8e01wB1CRtjBFb1lhr3zMT0wXF8gAA9zcL6h1/1yiD5lNFKYUTKtsAuLpNb51lUq1k8eshyqiCHMrSm9/nj4L1WcWSiiR4MxvU2DTGUmwrKJ6Z3tf1Xw== 0
It seems that a large portion of such files appeared on Jul 22 between 15:45 and 16:30 UTC, so maybe there was a GitHub outage at that time... there were a few this year. The few other short files apparently point to scans/builds of recently merged PRs, so the ephemeral branch really is not there.
UPDATE: https://t.co/cFs8GfdpVV marks it at 15:46
For reasons unknown, the "Date:" timestamp in the .0 header file is fresh, probably from the last scan; the result and content on-disk remain unchanged. Manually submitted requests through same proxy do return expected contents of the Git branch (wrapped into GitHub's REST API JSON markup). Probably the client did submit the cached Etag, maybe with object timestamp, and Github confirmed the cached value is still valid (except due to that hiccup it isn't).
Possibly the sort-of-fix would be to set up an optional timeout for cached (negative-only?) responses so that eventually they are retried. Or add a "forced" option to branch indexing/MBP rescan/SCM polling/... so that the manually issued request bypasses the cache (for all, or only negative, cached replies), updating the cache with real current replies as if from scratch.
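Until the plugin grows something like that, a rough interim cleanup along the lines of the find above could retire stale negative entries so the next scan re-fetches them (a sketch only: the cache path, the age threshold, and relying on file mtime are all assumptions, and, as noted above, the refreshed Date header suggests timestamps may get touched on every scan; like the other rm-based workarounds here, it is safest while Jenkins is quiet):

# Sketch: drop cached 404 entry pairs (hash.0 metadata + hash.1 body) older than ~60 minutes
cd "$JENKINS_HOME/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache" || exit 1
find . -name '*.0' -mmin +60 | while read F ; do
    grep -q 'HTTP.*404' "$F" && rm -f "$F" "${F%.0}.1"
done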
I previously tried forcing the job configs to be not-disabled (via the on-disk XMLs and a reload of the Jenkins configuration); this got the jobs un-marked (no gray balls in the dashboard)... but then they were re-marked, probably due to this cache issue. For our OrgFolders making MultiBranch Pipelines, the half-successful magic looked like this:
:; for D in /var/lib/jenkins/jobs/*/jobs ; do
     ( cd "$D" && for F in */branches/*/config.xml ; do
         sed 's,<disabled>true</disabled>,<disabled>false</disabled>,' -i "$F"
       done )
   done
We have determined that this issue is being caused by a bug in the GitHub API. The problem is described in https://github.com/github-api/github-api/pull/669 .
The linked PR (https://github.com/github-api/github-api/pull/665) now shows a workaround that should work for all scenarios. The workaround will only execute when actually needed and will occur without the caller of the github-api library knowing about it.
I'll do the work to upgrade this dependency (already in progress) and this problem will go away. This is my top priority for the coming week.
jglick Yes, I've reported it to GitHub via a support ticket. I have not heard back from them beyond an automated response.
The fix for this issue has been merged and will be released in the next day or two in github-branch-source v2.6.0.
If you want to try it out now:
install github-api-plugin 1.6.0
and then install the hpi from: https://repo.jenkins-ci.org/incrementals/org/jenkins-ci/plugins/github-branch-source/2.5.9-rc1028.3059575bf1cc/
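If it saves anyone a click, a downloaded .hpi can also be uploaded from the CLI instead of the Advanced upload screen (a sketch only; the controller URL, credentials, and file name are placeholders, and the '=' stdin form of install-plugin assumes a reasonably recent Jenkins CLI):

# Sketch: upload a downloaded .hpi via jenkins-cli; run once per plugin, github-api first
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:APITOKEN \
    install-plugin = -restart < github-branch-source.hpi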
bitwiseman AFAICT, with just github-api-plugin 1.106 the problem should go away? Of course, it might do something less optimal.
Thanks for fixing this, bitwiseman! It's been a long-time frustration.
I wonder if it has anything to do with the cache just being recently turned on again by default here:
https://github.com/jenkinsci/github-branch-source-plugin/commit/1b3a370d78a4f8b431a55bc79ee795f1d8cece88
The time when people started reporting this issue and when that change happened in early October seem to line up.
https://wiki.jenkins.io/display/JENKINS/GitHub+Branch+Source+Plugin
The cache can be disabled by setting -Dorg.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheSize=0. My guess is that will serve as a decent workaround, seeing as the cache wasn't on before and it shouldn't have any adverse effects AFAIK.