    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Minor
    • Environment: Currently Jenkins 2.59

      A multibranch pipeline job scans for new branches and creates and removes branch jobs as needed. This is basically a good idea.

      Unfortunately, the scan triggers an immediate build of new jobs when they're found in SCM. In many situations this is fine - but in some it's a no-go: I have a lot of jobs that may only run at night, as they interrupt services while running.

      I can avoid triggering the automatic build by setting the property "Suppress automatic SCM triggering" for all branches.

      But by doing that, Jenkins doesn't execute the Jenkinsfile and therefore never picks up a statement like

      properties([pipelineTriggers([cron('H 5 * * *')])])

      that I have on top of my Jenkinsfiles. Thus the job will never build.
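
      To make the problem concrete, here is a minimal sketch of the kind of Jenkinsfile described above (the node label and stage contents are hypothetical). The properties() step - and with it the cron trigger - only takes effect once the Jenkinsfile has actually been executed, which is exactly the chicken-and-egg issue being reported:

      ```groovy
      // Sketch of a nightly Jenkinsfile. The cron trigger below is only
      // registered when this script runs; a branch scan alone never gets here.
      properties([
          pipelineTriggers([cron('H 5 * * *')])  // once a day, around 05:00
      ])

      node('linux') {  // hypothetical node label
          stage('Nightly maintenance') {
              // ... service-interrupting work that must only run at night ...
          }
      }
      ```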

      If I get out of bed early in the morning and run the job once, Jenkins finds the cron specification and runs the job properly afterwards.

      It would be very nice - and give me some extra sleep - if the triggers were read at scan time instead of on the first run.

      Thanks in advance.

          [JENKINS-44172] Better scan of multibranch pipelines

          Jordan Taylor added a comment -

          Hi,

          We're also hitting a similar issue. When the multibranch pipeline scans for Jenkinsfiles, the appropriate branches appear as jobs in Jenkins. We're also utilising "Suppress automatic SCM triggering".

          The issue is that the newly created jobs only offer the 'Build now' option, but we need 'Build with parameters'. The current workaround is to initiate an initial run; the Jenkinsfile is then executed and the next run has the appropriate build parameters. However, this is extremely risky for us, as the initial run uses the default parameters and still carries out everything a full run would do - just with the default (unwanted) parameters.

          It's quite a big requirement for us to be able to run the job effectively with the correct parameters the first time around.
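
          One mitigation sometimes used for this (a sketch, not an endorsed fix - the parameter name DEPLOY_TARGET is hypothetical) is to register the parameters on every run but refuse to do real work on build #1, the automatic run triggered by branch indexing:

          ```groovy
          // Sketch: register parameters, then abort the very first run so it
          // only serves to make "Build with parameters" available.
          properties([
              parameters([
                  string(name: 'DEPLOY_TARGET', defaultValue: '',
                         description: 'hypothetical parameter')
              ])
          ])

          if (env.BUILD_NUMBER == '1') {
              currentBuild.result = 'NOT_BUILT'
              // Stop here: this run only exists to register the parameters.
              error('Parameters registered; re-run via "Build with parameters".')
          }

          node {
              stage('Deploy') {
                  echo "Deploying to ${params.DEPLOY_TARGET}"
              }
          }
          ```

          This still wastes one queued build per new branch, but it avoids executing the real work with default parameter values.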

          Thanks in advance.


          Edgars Batna added a comment - - edited

          The following features need to be clearly separated:

          1. Load Jobs into Jenkins from SCM
          2. Execute these Jobs

          The scanning of a multibranch pipeline should not trigger anything at all, and I bet there are other problems that require workarounds because of this.


          Tim Webster added a comment - - edited

          Another reason to disable automatic building after the scan is if you have all your jobs in Job DSL and frequently rebuild them. We run a Jenkins that is 100% configured from code, in a container. If there is a major configuration change (e.g. changing plugins), the container needs to be re-created. For this reason we also don't persist the workspace.

          If you re-create the container and it does a scan, you get a gazillion jobs running all of a sudden. For this reason we ditched multibranch. However, I'm now on another project that uses GitHub Organizations, which uses multibranch implicitly, so I'm having to face this again.

          Also, from what I've read, you can disable automatic SCM triggering as a workaround, but this will also disable builds from webhooks (correct me if I'm wrong).
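
          For reference, that suppress property can also be set from Job DSL, so it survives container re-creation. This is a sketch under assumptions: the job and repository names are hypothetical, and it relies on the branch-api plugin's noTriggerBranchProperty symbol being available in your Job DSL environment. As noted above, it suppresses webhook-triggered builds too:

          ```groovy
          // Job DSL sketch: a multibranch pipeline with "Suppress automatic
          // SCM triggering" applied to all branches.
          multibranchPipelineJob('example-service') {  // hypothetical name
              branchSources {
                  branchSource {
                      source {
                          git {
                              id('example-service-git')
                              remote('https://example.com/example-service.git')
                          }
                      }
                      strategy {
                          defaultBranchPropertyStrategy {
                              props {
                                  noTriggerBranchProperty()
                              }
                          }
                      }
                  }
              }
          }
          ```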


          Mark Wright added a comment -

          Just to reiterate the comments of gl1koz3 and timwebster9, we have a large number of jobs configured with the job DSL. Restarting Jenkins for any reason is a nightmare, as the branch scanning triggers a multitude of jobs. We generally have to spend some time killing the jobs in order to free resources for the builds we actually want.

          Like Tim, I'm under the impression that the only workaround is to disable the automatic SCM triggering which would also disable triggering from webhooks (which we don't want to do).


          Jason Parraga added a comment -

          After moving to multibranch pipelines, our company has experienced issues with branch scanning. We have a few integration-test-style jobs which run for about 2 hours, and each job may have 20-40 open pull requests (arguably, this is an issue in itself). Anyway, every time the branches are scanned, our build queue blows up and we cannot check in code for roughly a day unless we spend time killing many of those jobs. Since we have a few jobs like this, this happens a couple of times a week.

          We would really benefit from being able to specify a cron expression for scanning. This would allow us to at least scan during off-peak hours, like on the weekends, so there is minimal disruption. Right now we are at the mercy of the configured interval, which I assume starts counting from when the interval is configured.
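
          Until a scan cron exists, one possible stopgap (a sketch only, under assumptions: 'my-org/my-repo' is a hypothetical multibranch project name, and calling scheduleBuild() on a multibranch project - a ComputedFolder - queues a re-index; running this needs script approval or admin rights) is a small helper pipeline on its own off-peak cron that kicks the indexing itself:

          ```groovy
          // Sketch: helper pipeline that triggers branch indexing of a
          // multibranch project during off-peak hours.
          properties([
              pipelineTriggers([cron('H 3 * * 6')])  // Saturdays, around 03:00
          ])

          node {
              stage('Re-index') {
                  def project = jenkins.model.Jenkins.instance
                          .getItemByFullName('my-org/my-repo')  // hypothetical
                  project.scheduleBuild()  // queues a branch scan
              }
          }
          ```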


          Lorenz Aebi added a comment -

          Is there any progress on this? Or are there other solutions to solve the problem with polling the SCM?


            Assignee: Unassigned
            Reporter: Lars Skjærlund (larsskj)
            Votes: 27
            Watchers: 28