Type: Bug
Resolution: Fixed
Priority: Minor
Using a JobDSL script to generate a multibranch Pipeline job does not trigger a branch index to find Jenkinsfile
Here is a simple Job DSL script:
multibranchPipelineJob(repo) {
    branchSources {
        github {
            scanCredentialsId(credentials)
            repoOwner(credentials)
            repository(repo)
        }
    }
}
This created the job fine, but it did not trigger a branch scan until I manually triggered a branch index. Indexing also works if you open the multibranch job configuration and save it with no changes.
Creating a multibranch job directly from the UI works fine.
The only way I could trigger a branch index was to add a triggers section to the script, scanning periodically every minute (a sketch follows the list below). I then had to create three build steps:
1. A Job DSL step to create the multibranch Pipeline job with a trigger set to 1 minute
2. A shell step to sleep for 60 seconds
3. A Job DSL step to modify the multibranch Pipeline job and turn off the trigger.
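A minimal sketch of what step 1 might look like, assuming repo and credentials are variables defined earlier in the seed script (the value passed to periodic() is in minutes):

multibranchPipelineJob(repo) {
    branchSources {
        github {
            scanCredentialsId(credentials)
            repoOwner(credentials)
            repository(repo)
        }
    }
    triggers {
        // re-index periodically so the initial branch scan happens without manual action
        periodic(1)
    }
}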
Is related to: JENKINS-38887 support launching multibranch pipeline (branch indexing) (Closed)
[JENKINS-39682] Generated Multibranch Pipeline Job does not index branches
Job DSL uses the standard Jenkins API to create jobs, specifically Jenkins#createProjectFromXML(String name, InputStream xml). If the job does not trigger after being created from Job DSL, it will probably also not trigger after being created through the CLI or REST API, which use the same API internally. This should not be fixed on the consumer side of the API. It needs to be fixed either in core or in the Multibranch Pipeline plugin.
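For reference, a minimal script console sketch of that API call (the job name and the config.xml path are placeholders); this is roughly what Job DSL, the CLI and the REST create-item endpoint all end up calling:

import jenkins.model.Jenkins

// Creates the item from an XML definition but, as described in this issue,
// does not schedule an initial branch indexing run.
new File('/tmp/multibranch-config.xml').withInputStream { xml ->
    Jenkins.get().createProjectFromXML('example-multibranch', xml)
}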
We worked around this by queueing the job immediately after it is first created - something like this:
def exists = jenkins.model.Jenkins.getInstance().getItemByFullName(jobPath)
if (!exists) {
    queue(jobPath)
}
There are a few fixes for Multibranch Pipeline in recent versions of Job DSL, especially JENKINS-43693. Can someone verify whether the problem persists with Job DSL 1.65? Otherwise I will close this issue.
I'm using 1.66 and my multibranch jobs don't get re-indexed after updating. In fact, because of this, I'm running into https://issues.jenkins-ci.org/browse/JENKINS-40862.
A unique id has to be assigned to each branch source, otherwise indexing does not work as expected.
multibranchPipelineJob('example') {
    branchSources {
        github {
            id('3948734')
            scanCredentialsId('github')
            repoOwner('jenkinsci')
            repository('job-dsl-plugin')
        }
    }
}
I'm having an issue that matches this with Bitbucket multibranch Pipeline jobs created from Job DSL. The credentials seem to be invalid, even though they are set correctly. After running the seed job, the indexing fails with an authorization failure against Bitbucket. If I simply open the configuration for the job in the web UI and save it with no changes, it works from that point forward. It also shows a red failure icon with "Not found" for the Owner (Project in BB) until I hit save. Is this the same issue? It's unclear from the comments/lack of resolution. Are there possibly other already-known issues that match this?
I also faced this issue. I ended up adding build steps to my seed job:
First step: create the pipeline if it does not exist:
hudson.model.Item i = jenkins.model.Jenkins.instance.getItemByFullName("$folder/$name")
if (i != null) {
    throw new javaposse.jobdsl.dsl.DslException("$folder/$name already exist")
}
multibranchPipelineJob("$folder/$name") {
    description("")
    ...
    branchSources {
        branchSource {
            source {
                github {
                    id(UUID.randomUUID().toString())
                    ...
                }
            }
            strategy {
                ...
            }
        }
    }
    factory {
        workflowBranchProjectFactory {
            scriptPath("Jenkinsfile")
        }
    }
}
Then, in another step, I execute the post actions that are not triggered:
println("Initializing job")
hudson.model.Item i = jenkins.model.Jenkins.instance.getItemByFullName("$folder/$name")
println("Job found: " + i.getDisplayName())
i.save()
i.getSCMSources().get(0).afterSave()
queue("$folder/$name")
println("Initializing done")
I save the job (I'm not sure it is needed, but it makes the configuration file look much like it does when I create the job manually), then I trigger the afterSave() method on the SCM source (in the GitHub use case, this adds the webhook on the repository), and finally I queue a build of the multibranch job so that branches are scanned.
Job DSL is generic, so I'm not sure it will ever be able to do that, but until then, this will do the trick... I hope.
The problem, IMHO, is that MultiBranchProject does the necessary updates only after the config page has been saved (Stapler submit).
It should also implement an ItemListener and do the updates in onUpdated, onCreated, etc., to allow creating a multibranch project through the API (e.g. ModifiableTopLevelItemGroup.createProjectFromXML or AbstractItem.updateByXml).
This issue does not only apply to Job DSL but also to other consumers of that API, such as the CLI and REST API.
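A rough sketch of what such a listener could look like (this is not the fix that was eventually merged, and using scheduleBuild() to queue indexing is an assumption based on later comments):

import hudson.Extension
import hudson.model.Item
import hudson.model.listeners.ItemListener
import jenkins.branch.MultiBranchProject

// Sketch only: queue a branch indexing run whenever a multibranch project is
// created or updated through the API instead of via the config page.
@Extension
class MultiBranchIndexingListener extends ItemListener {
    @Override
    void onCreated(Item item) {
        if (item instanceof MultiBranchProject) {
            item.scheduleBuild()
        }
    }

    @Override
    void onUpdated(Item item) {
        if (item instanceof MultiBranchProject) {
            item.scheduleBuild()
        }
    }
}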
daspilker does this pull request also fix the issues reported in JENKINS-43693 and JENKINS-57235?
At first sight it might do so. This would be really great.
si, did that resolve your issue? I got hit by the exact same thing on my project. I tried gregoirew's suggestion but that didn't work either...
PR https://github.com/jenkinsci/branch-api-plugin/pull/158 was merged to fix this issue but has caused a worse issue in my opinion.
My seed job runs every 4 hours; it scans repositories in Bitbucket and creates and/or updates hundreds of jobs as a result.
After updating the Branch API plugin to 2.5.5, which includes this PR, all of my ~250 jobs now trigger a branch scan at the same time every 4 hours, as soon as the seed job completes.
This causes many API calls to Bitbucket at the same time, and then they hang, as Bitbucket has a limit on the number of API calls.
That means I have to revert the plugin to 2.5.4, as it's impossible to continue with this behaviour.
I understand what you are trying to fix here - running the scan when the job gets created for the first time. But it should not happen when the job gets updated.
Also, using a scan trigger fixes the problem of initial branch scanning, and it is recommended to use one anyway (as webhooks are sometimes missed). It has never been a problem for us.
Anyway, I think this has made things worse, to the point of being a blocker.
Thanks.
Triggering a scan is not always possible.
For instance, you speak about 250 repos; say around 15 branches per repo, and that will quickly exhaust your API quota.
On large orgs you only do that exceptionally, and rather per repo than at full scale.
I am not sure I understood what you mean, gregoirew.
The issue here is that my seed job creates/updates many jobs every few hours. That was working OK before, and after this change it started re-scanning all of the jobs at once.
Do you agree this is a problem or what are you suggesting?
You were suggesting enabling the index scan trigger to prevent the initial scan.
I was just pointing out that it is not always possible, as it would cause Jenkins to make a lot of API calls from time to time, which would result in the exact same issue.
But you can configure the trigger to run every day or every week etc. The scans run at random times, so the API calls are spread out and don't cause issues.
This is what I am using now:
triggers {
periodic(24 * 60) //every 24 hours
}
This does not mean that all jobs will trigger the scan at the same time every day. Unless I am wrong, they are spread randomly and happen separately.
Also, another suggestion is to do what has been done here only when the job has just been created for the first time: trigger the scan then, but not on a job update.
michelzanini, triggering a scan is required on job update too, as different things might have changed, e.g. the repository or the behaviours (looking for branches and/or pull requests and/or tags, filtering of heads and so on).
renescheibe, they might have changed, but they might NOT have as well. Scanning on every update is expensive because Bitbucket has limits on API calls.
At a minimum, what could be done is to trigger the scan at a random time in the future. For example, choose a random time within the next hour and run the scan then; that way the calls are spread out. Maybe this period could even be configurable. That is how periodic triggers work. Actually, again, maybe it is not needed on update at all: if you trigger the scan on create and use a periodic trigger, it should be fine, as eventually the job will be corrected.
At the moment, with the change that has been made, it's impossible to update several jobs at the same time. There is no flag, nothing I can do to disable this. So it's unusable.
michelzanini, we also hit the issue that all of our ~800 repos were reindexed every night (of course one day is not enough to reindex all of them within the GitHub API limits).
But I found a way to fix it.
See https://github.com/jenkinsci/job-dsl-plugin/blob/master/job-dsl-plugin/src/main/groovy/javaposse/jobdsl/plugin/JenkinsJobManagement.java#L456
If your configuration has not changed, then updateByXml is not called, so no reindexing is done.
In our case it was even harder to catch, because the jobs themselves did not change but the folders did (we have some portlets there that have a random ID by default).
We changed them to static IDs and the problem is gone.
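To find items whose XML is not stable between seed runs, a small script console sketch like the following can help (the output directory is a placeholder): dump every item's config.xml after two consecutive seed runs and diff the two dumps.

import jenkins.model.Jenkins
import hudson.model.AbstractItem

// Placeholder output directory; run once after each seed run and diff the results
// to spot jobs or folders whose generated XML keeps changing.
def outDir = new File('/tmp/job-xml-dump')
outDir.mkdirs()
Jenkins.get().getAllItems(AbstractItem).each { item ->
    new File(outDir, item.fullName.replace('/', '__') + '.xml').text =
        item.configFile.asString()
}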
I have the same problem as phlogistonthegr8, despite the suggested fix by gregoirew.
The repository scan fails with: "FATAL: Invalid scan credentials when using <valid credential name>"
This can be fixed manually by navigating to the job's configuration page and saving the job configuration without any changes.
However, it is not feasible for the Jenkins instance that I'm managing to do this for every job, as we create a large number of these jobs through Job DSL.
I tried looking through the API for some of the involved classes/interfaces, but haven't found anything suitable.

def allJobs = Jenkins.instance.getAllItems()
for (item in allJobs) {
    if (item.getClass().equals(org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject)) {
        item.save()
        item.getSCMSources().get(0).afterSave()
    }
}

Unfortunately, this did not work.
Is there any solution or workaround for this?
Used plugins: Branch API Plugin 2.5.8, GitHub Branch Source Plugin 2.8.3
Edit: The apiUri in my Job DSL definition was set incorrectly. When GitHub Enterprise is used, it has to end with "/api/v3".
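For illustration, a GitHub Enterprise branch source would look something like this (the host, credential ID, owner and repository are placeholders):

multibranchPipelineJob('example') {
    branchSources {
        github {
            id('example-ghe')
            // GitHub Enterprise endpoint must end with /api/v3
            apiUri('https://github.example.com/api/v3')
            scanCredentialsId('github-enterprise-credentials')
            repoOwner('my-org')
            repository('my-repo')
        }
    }
}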
Almost a year later, I am still locked to branch-api version 2.5.4 to avoid the issues caused by this change.
I discovered my problem is related to folders, but, unlike what has been described by ingwar, it seems to be related to having views inside folders.
Apparently, if you have views inside a folder defined with job-dsl, they always cause the folder XML to change, and this triggers cascading builds for everything inside those folders.
That means every time my seed job runs it re-runs hundreds of jobs. I tried everything, but with no luck. The only thing I can do is abandon the views inside folders and give up on having them. The XML generated by my folders always has the same content, and still it triggers builds.
I think this should be changed to avoid re-building jobs inside folders. It could also somehow be improved, maybe in job-dsl, to avoid re-building a project if little has changed, or at least to avoid re-building everything at once.
I am going to try to create an issue on job-dsl to see if someone can do something about this. It's definitely a pain.
I created this issue https://issues.jenkins-ci.org/browse/JENKINS-63344 for my problem on job-dsl.
Thank you for opening a new issue; I've closed this one.
I did this only because the scenario you're describing is not the scenario in this issue: it is caused by the change that fixed this issue.
I'll look at the new issue and comment over there.
per jglick: probably `job-dsl` should be enhanced to ensure that for a `ComputedFolder` it automatically calls `scheduleBuild` after creation
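A rough sketch of that idea as a post-seed script (this is not an existing job-dsl feature; the job path is a placeholder):

import jenkins.model.Jenkins
import com.cloudbees.hudson.plugins.folder.computed.ComputedFolder

// Placeholder path of a multibranch job the seed run has just created.
def item = Jenkins.get().getItemByFullName('team-folder/example-repo')
if (item instanceof ComputedFolder) {
    // Queue the folder computation, i.e. branch indexing for a multibranch project.
    item.scheduleBuild()
}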