• Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Component: build-blocker-plugin
    • Labels: None
    • Environment: Jenkins 1.644

      Job A triggers 8 other, secondary jobs.
      Each secondary job has the other 7 jobs referenced in its list of blocking jobs.
      All secondary jobs have exactly the same waiting period before starting.
      Our Jenkins has 4 executors.
      Somehow multiple (more than one) builds are picked up for execution at the very beginning; the remaining ones then execute one by one.
      We have seen the same problem with the Throttle Concurrent Builds plugin as well.
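
      The setup described above can be sketched in Job DSL (the same DSL used later in this thread). All job names, the downstream trigger, and the quiet period value here are hypothetical illustrations of the reported configuration, not the reporter's actual scripts:

      ```groovy
      // Hypothetical sketch: job A triggers 8 secondary jobs;
      // each secondary job blocks on the other 7.
      def secondaries = (1..8).collect { "secondary-$it" }

      job('job-A') {
          publishers {
              // Trigger all 8 secondary jobs when job A finishes.
              downstream(secondaries.join(', '))
          }
      }

      secondaries.each { name ->
          job(name) {
              // Block on every secondary job except this one.
              blockOn(secondaries.findAll { it != name })
              // All secondary jobs share the same waiting period.
              quietPeriod(30)
          }
      }
      ```

      With 4 executors and all 8 builds entering the queue at the same moment, the report is that several of them start concurrently even though each should block on the others.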

          [JENKINS-32903] Build-Blocker-Plugin does not block

          David Carlton added a comment - edited

          I'm seeing a block problem, too - I filed JENKINS-34206, because my symptoms are somewhat different, but it wouldn't surprise me if it has the same root cause.

          (If so, downgrading to a version before 1.618 might work.)


          pjdarton added a comment -

          "Me too", using Jenkins 2.7.4 and Build Blocker Plugin version 1.7.3 (running on Windows), connected to a lot of slaves (mostly Windows).

          My situation is that I've got four jobs:
          Hosted_Snapshot, which has a downstream job...
          Hosted_System_Test, which runs quickly and has a parameterised downstream job...
          Hosted_System_Test_Deploy, which is a matrix (multi-configuration) build which takes a while to run and has a downstream job...
          Hosted_System_Test_Tests which is a matrix (multi-configuration) build which also takes a while to run.

          Hosted_Snapshot does not block on anything, and regularly runs and makes sure that a build of Hosted_System_Test is queued. As this is not a parameterised downstream job, these don't stack; we only end up with at most one build of Hosted_System_Test queued at any one time.

          Hosted_System_Test is set to block on Hosted_System_Test_Deploy and Hosted_System_Test_Tests.
          Each time it does run, it queues a parameterised run of Hosted_System_Test_Deploy. As these are parameterised, these will stack up (so we can have multiple builds in the queue with different parameters), but as we are configured to block we should never have more than one in the queue.

          Hosted_System_Test_Deploy is set to block on Hosted_System_Test_Tests.

          What I see is that I've got multiple runs of Hosted_System_Test_Deploy (with different parameters) in the queue. "That shouldn't happen"!
          Hosted_System_Test should not be able to run at all while Hosted_System_Test_Deploy is either queued or running.

          What is expected is that Hosted_System_Test should stay in the queue (and not run) until Hosted_System_Test_Deploy is completely finished.
          What actually happens is that Hosted_System_Test doesn't always block correctly and this results in Hosted_System_Test being run when it shouldn't and hence triggering a downstream parameterised build of Hosted_System_Test_Deploy even though Hosted_System_Test_Deploy is already running.

          Personally, my guess is that there is a race condition when a build is taken from the queue and run on a slave - I suspect that the build "disappears" from the queue before it "appears" on the slave's executor, so if the blocker plugin gets asked whether it's OK to start a build at just that point in time, the job which would cause the "block" is neither in the queue nor running. However, that's just a guess based on the symptoms I see.
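
          The suspected race can be illustrated abstractly as a check-then-act window. This is purely an illustration of the mechanism being guessed at, not Jenkins's actual queue code:

          ```groovy
          // Illustration of the suspected window: a build leaves the queue
          // before it is registered on an executor. A concurrent
          // "is anything blocking?" check in that window sees neither state.
          def queue   = Collections.synchronizedList(['Hosted_System_Test_Deploy'])
          def running = Collections.synchronizedList([])

          // Dispatcher: remove the build from the queue...
          def item = queue.remove(0)
          // <-- window: 'item' is in neither 'queue' nor 'running' here.
          // A blocker check for Hosted_System_Test at this instant finds
          // no blocking job and lets the build start.
          running << item
          ```

          If this guess is right, the blocking check and the queue-to-executor handoff would need to happen under a common lock to close the window.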


          pjdarton added a comment -

          FYI one of my colleagues found why it wasn't working for us.
          We were configuring the build blocker plugin using the Job DSL plugin with code saying:

          Iterable<String> jobsToBlockOn = ...
          job.with {
              blockOn(jobsToBlockOn)
          }
          

          and this worked for us for a while.
          Then we upgraded to Jenkins 2.7.4 and DSL plugin 1.51 and to the current build blocker plugin, and some time later we noticed that builds weren't blocking.
          The fix, in our case, was to change the code to be:

          Iterable<String> jobsToBlockOn = ...
          job.with {
              blockOn(jobsToBlockOn) {
                  blockLevel('GLOBAL')
                  scanQueueFor('BUILDABLE')
              }
          }
          

          i.e. we needed to add the blockLevel and scanQueueFor elements to the configuration.


          Grigoriy Milman added a comment -

          The same problem occurs on Jenkins 2.40.
          Job A is configured to block on upstream and downstream build jobs.
          Another job B, upstream of job A, appeared in the queue.
          Job A started without waiting for job B to start and finish!


            Assignee: Unassigned
            Reporter: Grigoriy Milman (gremlm)
            Votes: 1
            Watchers: 6