In my case, adding an ID in the JobDSL script actually solved a couple of weird issues (one multibranch job was automatically triggered after branch indexing, another one had the above error).
Every branch source is required to have a non-null ID. The ID must be unique across all the branch sources within the multibranch job. In other words, if your job has one and only one source, then just give it an id of i-am-unique. It will not matter if you have 50 or 5000 multibranch jobs, each with a single different source, and all of those sources have an id of i-am-unique, because the ID is only used to differentiate the origin of branches within a single multibranch project, so we do not care if other multibranch projects have sources with the same id. https://issues.jenkins-ci.org/browse/JENKINS-48571 contains comments from me that may help.
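For anyone doing this via Job DSL, here is a minimal sketch of what that can look like (the job name, repository URL and credentials id are made up, and it assumes a reasonably recent Job DSL plugin that exposes id on the git branch source):

```groovy
// Minimal Job DSL sketch (hypothetical job name, repo URL and credentials id)
// giving the one and only branch source an explicit, stable id.
multibranchPipelineJob('example-project') {
    branchSources {
        git {
            // The id only needs to be unique within this multibranch job,
            // so a constant value is fine while there is a single source.
            id('i-am-unique')
            remote('https://git.example.com/team/example-repo.git')
            credentialsId('example-credentials')
        }
    }
}
```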
unlikelyzero, you are in the first cohort I describe in my analysis at https://issues.jenkins-ci.org/browse/JENKINS-48571?focusedCommentId=329111&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-329111 namely that you are being caught by the bug in BlueOcean where it does not set the ID.
To be clear, not assigning an ID is a bug in BlueOcean, because the SCM API has no way to determine whether a SCMHead of master is coming from one source or another in the event that you have two sources configured... Previously BlueOcean was relying on a side-effect that would accidentally assign a non-null id before the configuration was saved. However, this means that every time BlueOcean updated the job, a new randomly generated id would be assigned, and consequently a rebuild storm would be triggered on the next full scan. Because reconfiguring a job should not result in a rebuild storm (as distinct from a build storm), and the rebuild storm is a result of the failure to supply a non-null id reflecting the fact that a BlueOcean-managed multibranch project has one and only one source, the failure to supply an id is a bug in BlueOcean.
In other words, the accidental side-effect that BlueOcean was relying on was causing a more subtle bug that would have been hard for users to identify...
- a build storm occurs when you change the configuration to discover new branches that were not discovered before
- a rebuild storm occurs when you change the configuration and all the existing branches get rebuilt
There are potentially some cases where you might actually want a rebuild storm (e.g. say you added a "clean before checkout" trait), but the branch-api plugin will err on the side of caution and only rebuild if the ID has changed, assuming that changes affecting the SCMs generated by an SCMSource with the same ID will be caught by users retriggering the branches that need a retrigger... after all, branch-api does not know which changes within an SCMSource are significant. But if you add a new SCMSource and remove the old one (as distinct from modifying the existing one's configuration)... well, that is a significant change; it looks like it could be a whole different repository, so we need to rebuild everything.
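To make that decision concrete, here is a rough conceptual sketch in Groovy. This is not the actual branch-api code, just an illustration of the id-based comparison described above:

```groovy
// Conceptual sketch only, NOT the real branch-api implementation: when a
// multibranch project is reconfigured, each new source is matched against the
// previously persisted sources purely by id.
def classifySource(Set<String> previousIds, String newSourceId) {
    if (previousIds.contains(newSourceId)) {
        // Same id: treated as an update of the existing source, so the
        // already-built branches are left alone (no rebuild storm).
        return 'UPDATE'
    }
    // Unknown id: indistinguishable from remove-followed-by-add, so the
    // branches of this source look brand new and get rebuilt.
    return 'ADD'
}

assert classifySource(['i-am-unique'] as Set, 'i-am-unique') == 'UPDATE'
assert classifySource(['i-am-unique'] as Set, UUID.randomUUID().toString()) == 'ADD'
```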
The behaviour that BlueOcean is manifesting looks, from the point of view of branch-api, indistinguishable from Remove-followed-by-Add as distinct from Update... consequently branch-api will interpret it as "The source has changed, rebuild everything". BlueOcean should probably just be specifying an ID of blueocean, and that would fix the issues for BlueOcean users.
The good news for BlueOcean users is that the workaround (for the tip revision error) is simple: save the job in the classic UI after creating it in BlueOcean. When you load the classic config screen, that will round-trip the randomly assigned ID that is stored in memory but has missed being saved to disk, and persist it to disk. Sadly, if you subsequently modify the job using BlueOcean, you will need to save the job in the classic UI again, and you will still have the side-effect of a rebuild storm, because BlueOcean's update will have assigned an id of null and the first call to SCMSource.getId() will have generated a new random id which is highly unlikely to match the previous random id, so a rebuild storm will occur.
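If you want to confirm that the id actually made it to disk, something along these lines in the Jenkins Script Console can help (the job name is made up; adjust it to your own multibranch project):

```groovy
// Script Console sketch (hypothetical job name) to list each branch source of
// a multibranch project together with its id. Run this after saving in the
// classic UI: getId() lazily generates a random id if none was set, so an
// unexpected UUID-looking value here suggests the id was never persisted.
import jenkins.model.Jenkins

def project = Jenkins.instance.getItemByFullName('example-project')
project.getSources().each { branchSource ->
    def source = branchSource.getSource()
    println "${source.getClass().simpleName}: id=${source.getId()}"
}
```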
Yes, I am using JobDSL to create and update those jobs.