JENKINS-39682

Generated Multibranch Pipeline Job does not index branches

      Using a JobDSL script to generate a multibranch Pipeline job does not trigger a branch index to find the Jenkinsfile.

      Here is a simple DSL job:

      multibranchPipelineJob(repo) {
        branchSources {
          github {
            scanCredentialsId(credentials)
      repoOwner(owner)
            repository(repo)
          }
        }
      }
      

      Using this created the job fine, but it did not trigger a branch scan until I manually triggered a branch index. Indexing also runs if you open the multibranch job configuration and save it with no changes.

      Creating a multibranch job directly from the UI works fine.

      The only way I can trigger a branch index is to add a triggers section to the DSL to scan periodically every minute. I then had to create three build steps (sketched below):

      1. JobDSL to create the multibranch Pipeline job with a trigger set to 1 minute
      2. Shell step to sleep for 60 seconds
      3. JobDSL to modify the multibranch Pipeline job and turn off the trigger.
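
      A minimal sketch of step 1 (assuming the same placeholder variables as the snippet above, plus a hypothetical owner variable; Job DSL's periodic folder trigger takes an interval in minutes, as in the comments below):

      multibranchPipelineJob(repo) {
        branchSources {
          github {
            scanCredentialsId(credentials)
            repoOwner(owner)
            repository(repo)
          }
        }
        triggers {
          // scan every minute until step 3 turns the trigger off again
          periodic(1)
        }
      }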


          greg oire added a comment -

          You were suggesting to enable the index scan trigger to prevent the initial scan.
          I was just pointing out that it is not always possible, as it would cause Jenkins to make a lot of API calls from time to time, which would result in the exact same issue.


          Michel Zanini added a comment - - edited

          But you can configure the trigger to run every day or every week, etc. The scans fire at randomized times, so the API calls are spread out and don't cause issues.

          This is what I am using now:

          triggers {
            periodic(24 * 60) // every 24 hours
          }

          This does not mean that all jobs will trigger the scan every day at the same time. Unless I am wrong, they are spread randomly and happen separately.


          Michel Zanini added a comment -

          Also, another suggestion: do what has been done here, but only when the job has just been created for the first time; trigger the scan then, but not on a job update.


          René Scheibe added a comment -

          michelzanini triggering a scan is required on job update too, as different things might have changed, e.g. the repository or the behaviours (looking for branches and/or pull requests and/or tags, filtering of heads, and so on).


          Michel Zanini added a comment - - edited

          renescheibe, they might have changed, but they might NOT have as well. Scanning on every update is expensive because Bitbucket has limits on API calls.

          At a minimum, what could be done is to trigger the scan at a random time in the future. For example, choose a random time within a 1-hour window from now and run the scan then; that way the calls are spread out (see the sketch below). Maybe this period could even be configurable; that's how periodic triggers work. Actually, maybe it is not even needed on update: if you scan on create and use a periodic trigger, it should be fine, as eventually the job will be corrected.
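
          A hypothetical script-console illustration of that spreading (assumptions: MultiBranchProject can be scheduled like a hudson.model.BuildableItem, and the 1-hour window is an arbitrary choice):

          import java.util.concurrent.ThreadLocalRandom

          import jenkins.branch.MultiBranchProject
          import jenkins.model.Jenkins

          // illustration only: spread the scans over a 1-hour window (assumption)
          int windowSeconds = 60 * 60

          Jenkins.get().getAllItems(MultiBranchProject).each { project ->
            // random quiet period so hundreds of scans do not hit the API at once
            int delay = ThreadLocalRandom.current().nextInt(windowSeconds)
            project.scheduleBuild(delay, new hudson.model.Cause.UserIdCause())
          }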

          At the moment, with the change that has been made, it's impossible to update several jobs at the same time. There is no flag, nothing I can do to disable this. So it's unusable.


          Karol Lassak added a comment -

          michelzanini we also hit this issue: all of our ~800 repos were reindexed every night (and of course one day is not enough to reindex all of them within the GitHub API limits).

          But I found a way to fix it.

          See https://github.com/jenkinsci/job-dsl-plugin/blob/master/job-dsl-plugin/src/main/groovy/javaposse/jobdsl/plugin/JenkinsJobManagement.java#L456
          If your configuration has not changed, then updateByXml is not called, so no reindexing is done.

          In our case it was even harder to catch, because the jobs themselves did not change but the folders did (we have some portlets there that get a random ID by default).
          We changed them to static IDs and the problem is gone.
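
          A minimal sketch of that kind of fix (assumptions: a Dashboard View plugin portlet whose id is randomly generated by default; the exact element and field names depend on the portlet you use):

          // Pin the portlet ID via a configure block so the generated view XML
          // is identical on every seed run and updateByXml is skipped.
          dashboardView('ci/overview') {
            jobs { regex('.*') }
            configure { view ->
              view / 'leftPortlets' << 'hudson.plugins.view.dashboard.core.UnstableJobsPortlet' {
                id('dashboard_portlet_1') // static instead of the random default
                name('Unstable jobs')
                showOnlyFailedJobs(false)
              }
            }
          }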


          Thao-Nguyen Do added a comment - - edited

          I have the same problem as phlogistonthegr8 despite the suggested fix by gregoirew.

          The repository scan fails with: "FATAL: Invalid scan credentials when using <valid credential name>"
          This can be fixed manually by navigating to the job's configuration page and saving the job configuration without any changes.

          However, it is not feasible for the Jenkins instance that I'm managing to do this for every job, as we create a large number of these jobs through Job DSL.
          I tried to look through the API for some of the involved classes/interfaces, but haven't found anything suitable.

          // Script console attempt: re-save every multibranch project and notify
          // its first SCM source, hoping to re-trigger a branch scan
          def allJobs = Jenkins.instance.getAllItems()

          for (item in allJobs) {
            if (item instanceof org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject) {
              item.save()
              item.getSCMSources().get(0).afterSave()
            }
          }
          

          Unfortunately, this did not work.

          Is there any solution or workaround for this?

          Used plugins: Branch API Plugin 2.5.8, GitHub Branch Source Plugin 2.8.3

          Edit: The apiUri in my Job DSL definition was set incorrectly. When GitHub Enterprise is used, it has to end with "/api/v3".
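
          For reference, a script-console sketch that enqueues a scan directly instead of re-saving the job (the assumption here is that MultiBranchProject is buildable via hudson.model.BuildableItem, so scheduling it runs the "Scan Repository Now" computation):

          import jenkins.branch.MultiBranchProject
          import jenkins.model.Jenkins

          // enqueue branch indexing for every multibranch job on the instance
          Jenkins.get().getAllItems(MultiBranchProject).each { project ->
            project.scheduleBuild(new hudson.model.Cause.UserIdCause())
          }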


          Michel Zanini added a comment -

          Almost a year later, and I am still locked to branch-api version 2.5.4 to avoid the issues caused by this change.

          I discovered my problem is related to folders, but, unlike what has been described by ingwar, it seems to be related to having Views inside folders.

          Apparently, if you have views inside folders using job-dsl, they always cause the folder XML to change, and this triggers cascading builds for everything inside those folders.

          That means every time my Seed Job runs it re-runs hundreds of jobs. I tried everything, but with no luck. The only thing I can do is abandon the views inside folders and give up on having them. The XML generated by my folders always has the same content, and still it triggers builds.

          I think this should be changed to avoid re-building jobs inside folders. It could also be improved, maybe in job-dsl, to avoid re-building a project if little has changed. Or, at least, to avoid re-building everything at once.

          I am going to try to create an issue on job-dsl to see if someone can do something about this. It's definitely a pain.

          Michel Zanini added a comment -

          I created this issue https://issues.jenkins-ci.org/browse/JENKINS-63344 for my problem on job-dsl.


          Liam Newman added a comment -

          michelzanini

          Thank you for opening a new issue; I've closed this one.
          I did this only because the scenario you're describing is not the scenario in this issue: it is caused by the change that fixed this issue.

          I'll look at the new issue and comment over there.

