JENKINS-21044

Throttle Concurrent Builds blocking Jenkins queue

      Jenkins stopped responding to browser requests for Jenkins pages, and I think it may be caused by the recent upgrade to Throttle Concurrent Builds 1.8.1.

      Requests were getting blocked waiting on the Queue lock <0x00000004181e4520>:

      "Handling GET /jenkins/ : RequestHandlerThread[#171]" daemon prio=10 tid=0x00000000168ee800 nid=0x193b waiting for monitor entry [0x000000004335b000]
         java.lang.Thread.State: BLOCKED (on object monitor)
      	at hudson.model.Queue.getItems(Queue.java:687)
      	- waiting to lock <0x00000004181e4520> (a hudson.model.Queue)
      	at hudson.model.Queue$CachedItemList.get(Queue.java:216)
      	at hudson.model.Queue.getApproximateItemsQuickly(Queue.java:717)
      	at hudson.model.View.getApproximateQueueItemsQuickly(View.java:483)
      	at sun.reflect.GeneratedMethodAccessor355.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      

      The Queue lock is held by a thread running Throttle Concurrent Builds code (see the trace below).
      Further dumps after 10, 20, and 30 minutes all showed this same stack trace.

      "Thread-126" daemon prio=10 tid=0x00002aaae0529800 nid=0x1785 runnable [0x0000000046590000]
         java.lang.Thread.State: RUNNABLE
      	at java.util.WeakHashMap$HashIterator.hasNext(WeakHashMap.java:875)
      	at java.util.AbstractCollection.toArray(AbstractCollection.java:139)
      	at java.util.ArrayList.<init>(ArrayList.java:164)
      	at hudson.plugins.throttleconcurrents.ThrottleJobProperty.getCategoryProjects(ThrottleJobProperty.java:141)
      	- locked <0x000000041a79b778> (a java.util.HashMap)
      	at hudson.plugins.throttleconcurrents.ThrottleQueueTaskDispatcher.canRun(ThrottleQueueTaskDispatcher.java:118)
      	at hudson.plugins.throttleconcurrents.ThrottleQueueTaskDispatcher.canRun(ThrottleQueueTaskDispatcher.java:90)
      	at hudson.model.Queue.isBuildBlocked(Queue.java:937)
      	at hudson.model.Queue.maintain(Queue.java:1006)
      	- locked <0x00000004181e4520> (a hudson.model.Queue)
      	at hudson.model.Queue$1.call(Queue.java:303)
      	at hudson.model.Queue$1.call(Queue.java:300)
      	at jenkins.util.AtmostOneTaskExecutor$1.call(AtmostOneTaskExecutor.java:69)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      	at hudson.remoting.AtmostOneThreadExecutor$Worker.run(AtmostOneThreadExecutor.java:104)
      	at java.lang.Thread.run(Thread.java:724)
      
         Locked ownable synchronizers:
      	- None
      

      CPU usage for this thread was at ~100% for the 30 minutes I watched it before I restarted Jenkins
      (PID 6021 = nid 0x1785):

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                                                   
       6021 rcbuild_  35  10 18.0g 6.2g  32m R 99.7 19.7  30:45.97 java     
      

      I have rolled back to Throttle Concurrent Builds 1.8 for now.

      I'm still learning how to investigate thread dumps, but please let me know if there is anything I can do to help.

          [JENKINS-21044] Throttle Concurrent Builds blocking Jenkins queue

          Oleg Nenashev added a comment -

          Could you briefly describe your case (number of projects, categories structure, frequency of job submissions, etc.)?

          Oleg Nenashev added a comment -

          BTW, I confirm that the current implementation may cause huge calculation overhead when there is a large number of jobs with the same category:

          • keySet() causes a useless copy of the data
          • the copy to a new array leads to an additional copy of the items

          It makes sense to extend the cache's lock time and iterate the HashMap directly (a sketch of the difference follows below).

          I'll reassign the issue to Jesse Glick, who is the author of https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/10
          Jesse, do you have some time to fix the issue? If not, I can do it on my own.
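
          For illustration, here is a minimal sketch of the difference described above: copying the keys into a fresh list on each call versus iterating the map in place under the lock. The class and names are invented, not the plugin's actual code, and Jesse's follow-up below questions whether the copy is really the bottleneck.

          import java.util.ArrayList;
          import java.util.List;
          import java.util.Map;
          import java.util.WeakHashMap;

          class CategoryLookupSketch {
              private final Map<Object, Void> propertiesForCategory = new WeakHashMap<Object, Void>();

              // Current approach (simplified): snapshot the keys into a new
              // list under the lock, then work on the copy outside it.
              List<Object> snapshotKeys() {
                  synchronized (propertiesForCategory) {
                      return new ArrayList<Object>(propertiesForCategory.keySet());
                  }
              }

              // Proposed approach: hold the lock a little longer and iterate
              // the live key set directly, avoiding the intermediate copy.
              void iterateDirectly() {
                  synchronized (propertiesForCategory) {
                      for (Object property : propertiesForCategory.keySet()) {
                          // inspect the throttled property here
                      }
                  }
              }
          }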

          Oleg Nenashev added a comment - BTW, I confirm that the current implementation may cause huge calculation efforts in the case of big number of jobs with a same category: keySet() causes a useless copy of the data copy to a new array leads to an additional copy of items It makes sense to extend a the cache's lock time, but to iterate the HashMap directly. I'll reassign issue to Jesse Glick, who is an author of https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/10 Jesse, do you have some time to fix the issue? If no, I can do it on my own

          Jesse Glick added a comment -

          I do not actually run this plugin anywhere so I have no real way of confirming whether a given change improves or degrades performance on a large installation.

          Extending the scope of the lock on propertiesByCategory may avoid overhead, but runs the risk of deadlocks if foreign calls such as getItem are included.

          Jesse Glick added a comment -

          BTW the actual thread dump shows an issue inside WeakHashMap.HashIterator.hasNext, likely an endless loop. The keySet method does not copy any data, and there is no indication that the array copy overhead is relevant. If I am right, iterating the map directly would not make any difference.

          What JRE is being used to run Jenkins? If not the most recent Java 7, try updating and see if the issue remains.

          centic added a comment - - edited

          I have a very similar issue with 1.8.1 compared to 1.8, only for me it goes to 100% CPU during Jenkins startup and never shows the Dashboard; it keeps saying Jenkins is still starting up.

          I think this is not a performance problem but rather some sort of endless loop, or at least a very big loop, if this simple put runs for over half an hour!

          I'm on JRE 1.7.0_45.

          Reverting to 1.8 fixed it for me for now...

          CPU Sampling Content

          Level  Name                        Class                      CPU Time [ms] ↓  Total CPU Time [ms]  Execution Time [ms]  Total Time [ms]  Wait time [ms]
          1      setOwner(AbstractProject)   ThrottleJobProperty        1.00             1.00                 71700                71700            71650.00
          2      setOwner(Job)               ThrottleJobProperty        0.00             1.00                 0                    71700            0.00
          3      onLoad(ItemGroup, String)   Job                        0.00             1.00                 0                    71700            0.00
          4      onLoad(ItemGroup, String)   AbstractProject            0.00             1.00                 0                    71700            0.00
          5      onLoad(ItemGroup, String)   Project                    0.00             1.00                 0                    47800            0.00
          6      load(ItemGroup, File)       Items                      0.00             1.00                 0                    47800            0.00
          7      run(Reactor)                Jenkins$18                 0.00             1.00                 0                    47800            0.00
          8      run(Reactor)                TaskGraphBuilder$TaskImpl  0.00             1.00                 0                    47800            0.00
          9      runTask(Task)               Reactor                    0.00             1.00                 0                    47800            0.00
          10     runTask(Task)               Jenkins$7                  0.00             1.00                 0                    47800            0.00

          centic added a comment -

          More debugging seems to indicate that it actually loops in the last two lines of the following java.util.WeakHashMap code:

          public V put(K key, V value) {
              Object k = maskNull(key);
              int h = hash(k);
              Entry<K,V>[] tab = getTable();
              int i = indexFor(h, tab.length);

              for (Entry<K,V> e = tab[i]; e != null; e = e.next) {
                  if (h == e.hash && eq(k, e.get())) {
                      // ...

          Somehow tab contains an entry that points to itself via e.next, so the code loops endlessly.

          Googling shows that this can happen if the WeakHashMap is used in multiple threads without proper synchronization, e.g. https://java.net/jira/browse/JAVASERVERFACES-2544 and https://issues.apache.org/bugzilla/show_bug.cgi?id=50078, http://www.adam-bien.com/roller/abien/entry/endless_loops_in_unsychronized_weakhashmap

          However, after a quick glance I do not see where the plugin's code uses the map outside the synchronized block, so how can it have multi-threading issues?!
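
          For readers unfamiliar with this failure mode, here is a minimal (and deliberately broken) sketch of how unsynchronized cross-thread use of a WeakHashMap can corrupt a bucket chain during a resize and produce exactly this kind of endless loop. It is non-deterministic and may take several runs to trigger; all names are invented.

          import java.util.Map;
          import java.util.WeakHashMap;

          public class WeakHashMapRaceSketch {
              public static void main(String[] args) {
                  // Intentionally NOT synchronized and NOT thread-safe.
                  final Map<Object, Object> map = new WeakHashMap<Object, Object>();
                  Runnable writer = new Runnable() {
                      public void run() {
                          while (true) {
                              map.put(new Object(), "value"); // racy structural modification
                          }
                      }
                  };
                  new Thread(writer).start();
                  new Thread(writer).start();
                  // If a racy resize leaves an entry whose e.next points back at
                  // itself, both threads spin at 100% CPU inside put(), matching
                  // the "Thread-126" dump above.
              }
          }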

          Michael Niestegge added a comment - - edited

          I ran into the same problems now:

          I use Jenkins 1.547 on Debian 5.0.10 with Throttle Concurrent Builds 1.8.1

          At midnight several jobs in the same group are triggered, causing the Jenkins web interface to freeze (process at ~100% CPU). The only solution is to kill the process. After removing the plugin, everything runs fine again. In my opinion the plugin is not usable until this error is fixed.

          Switching back to 1.8 solved the issue for me.

          Oleg Nenashev added a comment -

          @centic, thanks for the analysis of the issue.

          The issue was introduced by https://github.com/jenkinsci/throttle-concurrent-builds-plugin/commit/b1f3b836ecd1fbd7a141c89469635c1ca5838dcf, committed by @glick.

          I hope to find some time to fix the issue next week (though Jesse will probably get to it before me).
          I'll put the info on the plugin's page as well.

          Jesse Glick added a comment -

          No idea for a fix from me, sorry. The tip about unsynchronized access sounds plausible, but as you note, all accesses to the map should be synchronized from the same monitor. If you can reproduce the bug, then there are various ways of proceeding with debugging, such as switching to a (strong) HashMap to see if the endless loop goes away; this would of course introduce a memory leak, but at least you would have narrowed down the problem.

          centic added a comment -

          Some more info for anybody trying to work on this; further investigation turned up the following:

          • Replacing the WeakHashMap with a HashMap is tricky, as the plugin's configuration XML also records the type of the map, so you need to replace it there as well or you will still end up with WeakHashMaps (it cost me a couple of hours to find that out; see the demo after this list)
          • I think this is also the reason the problem happens in the first place: the config serialization is reflection-based and constructs the WeakHashMap outside the synchronization, so multiple threads can still access the data concurrently, and because of this no amount of synchronization in the plugin will fix it!
          • That also explains why you do not always see it: it only happens if you have a number of throttles defined and use them in multiple jobs. Startup will sometimes work, but it may still fail later when the properties are accessed as jobs start.
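
          A small demonstration of the XML point above, assuming XStream (the library Jenkins uses for configuration persistence) is on the classpath; the holder class is invented and the exact XML shape may vary with XStream's converters:

          import com.thoughtworks.xstream.XStream;
          import java.util.WeakHashMap;

          public class XmlTypeSketch {
              static class Holder {
                  // Declared as Object, so XStream must record the concrete class.
                  Object cache = new WeakHashMap<Object, Object>();
              }

              public static void main(String[] args) {
                  // The output contains a class="java.util.WeakHashMap" attribute,
                  // which is why changing only the field's declared type is not
                  // enough: old XML on disk still deserializes as a WeakHashMap.
                  System.out.println(new XStream().toXML(new Holder()));
              }
          }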

          Jesse Glick added a comment -

          propertiesByCategory is not intended to be serialized at all, so that is probably the issue. It may need to be made transient (and restored in readResolve); or making it static might suffice. A sketch of the transient approach follows.
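
          A minimal sketch of that suggestion; the field name matches the plugin, but the class and value types here are simplified for illustration:

          import java.io.Serializable;
          import java.util.HashMap;
          import java.util.Map;

          class DescriptorSketch implements Serializable {
              // 'transient' keeps serialization (including XStream, which also
              // skips transient fields) from persisting the cache to disk.
              private transient Map<String, Object> propertiesByCategory =
                      new HashMap<String, Object>();

              // Invoked after deserialization; recreate the cache instead of
              // trusting whatever (possibly corrupt) state was read from disk.
              private Object readResolve() {
                  propertiesByCategory = new HashMap<String, Object>();
                  return this;
              }
          }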

          Andrew Bayer added a comment -

          Jesse, Oleg, are you guys working on this? If not, I think we should probably roll back and re-release the old 1.8.0 as 1.8.2 to trump the broken 1.8.1...

          Oleg Nenashev added a comment -

          @abayer
          I've started working on the issue.
          BTW, I cannot re-test the fix right now because I am on vacation.

          If somebody has a relevant dev. system, I can provide a test build.

          Tim Pizey added a comment -

          I agree with abayer.

          We have lost many person hours by assuming no harm would come from installing this update.

          This is the first time in my use of Jenkins that updating to the latest version has bitten me.

          There should be a mechanism to prevent a plugin with such an issue being presented to end users.

          Oleg Nenashev added a comment -

          > We have lost many person hours by assuming no harm would come from installing this update.
          There is a notification on the plugin's Wiki page.
          > This is the first time in my use of Jenkins that updating to the latest version has bitten me.
          Heh, you are really lucky. BTW, you are right that such an approach badly affects users.
          I'm responsible for this f*ckup, because I released a version without long-run testing on development servers.
          BTW, we have been waiting for @abayer's review of PR #6 and PR #10 for more than a month...

          > There should be a mechanism to prevent a plugin with such an issue being presented to end users.
          AFAIK, the open-source version of the update center has no such feature.

          Marcel Wiederkehr added a comment -

          We had the same issue after updating the plugins. After many hours we traced the issue to this plugin (and found the warning on the wiki page).

          I have a relevant system. You can send me the test build to test the fix.

          Andrew Bayer added a comment -

          Oleg - sorry about that. I get so many PR emails from all of the Jenkins repos that I don't notice all of the relevant ones. If there's anything you need a review on now, I can take a look, but in general, I trust Jesse to be better than me at knowing what code's likely to cause issues anyway. =)

          Jesse Glick added a comment -

          I could prototype a fix if my help is needed, but I have no realistic test environment for it.

          Oleg Nenashev added a comment -

          Finally, I have reproduced the issue.
          It occurs during configuration-reload operations with many jobs.

          • As the comments above describe, the internal categories cache is being dumped to hudson.plugins.throttleconcurrents.ThrottleJobProperty.xml
          • The property's loading operations are not synchronized
          • It seems that ThrottleQueueTaskDispatcher tries to access the data before the plugin's configuration has finished loading. In my case this occurs while the queue is being loaded

          I'll create the PR soon.
          BTW, we will have to add an additional lock object to prevent issues on persistence; a sketch of that locking follows.
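
          A minimal sketch, with invented names, of such a lock object: the same monitor guards both the XML reload and every cache read, so the dispatcher can never observe a half-loaded map.

          import java.util.HashMap;
          import java.util.Map;

          class LoadLockSketch {
              // Dedicated lock so load() and all readers share one monitor.
              private static final Object CACHE_LOCK = new Object();
              private Map<String, Object> propertiesByCategory =
                      new HashMap<String, Object>();

              void load() {
                  synchronized (CACHE_LOCK) {
                      // ...the XStream reload of the descriptor XML would run here...
                      // Discard any cache data that leaked into the persisted form.
                      propertiesByCategory = new HashMap<String, Object>();
                  }
              }

              Object getCategoryProperties(String category) {
                  synchronized (CACHE_LOCK) {
                      return propertiesByCategory.get(category);
                  }
              }
          }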

          Oleg Nenashev added a comment -

          I've attached a local build with a probable fix:
          https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/13

          I've tried about 10 restarts with big queues, and the issue has not reproduced. BTW, it would be great to have a lightweight unit test.

          Jesse Glick added a comment -

          While that PR may work, I think it would be better to just make the cache be transient. I never had any intention of its being persisted to disk, so if that is what is happening, that was purely an accident (XStream automagically finding stuff and saving it). The cache would just need to be recreated if and when a job category is changed or the cache is requested, but that is pretty simple synchronization.

          Oleg Nenashev added a comment -

          @Jesse
          I've made the cache transient.
          All the other changes just provide a safe migration path when there is stale cache data on disk.
          The load procedure is synchronized to avoid concurrent access. After loading, the code re-saves the configuration to purge the bad data from the config files.

          Dirk Kuypers added a comment -

          We have been testing the attached version in our production environment for about 4 hours now. It works like a charm so far.

          We are consolidating two Jenkins masters onto one machine with about 15 slaves, 3000 jobs altogether, and quite a few continuously running jobs with concurrent builds that heavily load the roughly 100 cores. We even had severe problems with blocked threads when we rolled back to 1.8.0! Funnily enough, I was using version 1.8.1 on "my" old master without problems before (even more jobs, same number of nodes), and using the Throttle Concurrent Builds plugin was "my" idea.

          Oleg Nenashev added a comment -

          The new version works for me as well.

          @abayer, do you confirm the merge?

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Oleg Nenashev
          Path:
          src/main/java/hudson/plugins/throttleconcurrents/ThrottleJobProperty.java
          http://jenkins-ci.org/commit/throttle-concurrent-builds-plugin/9b7562d4b08e0a4202130d43307082553142df82
          Log:
          [FIXED JENKINS-21044] - Throttling blocks the Jenkins queue

          Seems the issue was in improper usage of WeakHashMap (see analysis from @centic).
          I've managed to reproduce the behavior in the following case:

          • There is a big number of jobs/configurations with throttling
          • The builds queue is not empty
            // Seems that ThrottleQueueTaskDispatcher tries to access the data before the complete loading of the plugin's configuration.

          This fix provides an explicit locking of any load operations + manual cleanup of erroneous cache data, which goes to persistence in 1.8.1
          Resolves https://issues.jenkins-ci.org/browse/JENKINS-21044

          Signed-off-by: Oleg Nenashev <o.v.nenashev@gmail.com>

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Oleg Nenashev
          Path:
          src/main/java/hudson/plugins/throttleconcurrents/ThrottleJobProperty.java
          http://jenkins-ci.org/commit/throttle-concurrent-builds-plugin/c453516716079248d74ce588efc0293669e6e1a7
          Log:
          JENKINS-21044 - Don't create a new HashMap after the load operation

          Signed-off-by: Oleg Nenashev <o.v.nenashev@gmail.com>

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Andrew Bayer
          Path:
          src/main/java/hudson/plugins/throttleconcurrents/ThrottleJobProperty.java
          http://jenkins-ci.org/commit/throttle-concurrent-builds-plugin/70107b4222502935a9e46beffa31daae2e99e50b
          Log:
          Merge pull request #13 from synopsys-arc-oss/JENKINS_21044_fix

          [FIXED JENKINS-21044] - Throttling blocks the Jenkins queue

          Compare: https://github.com/jenkinsci/throttle-concurrent-builds-plugin/compare/dc16282a90b7...70107b422250

          Andrew Bayer added a comment -

          Thanks, Oleg!

            Assignee: Oleg Nenashev
            Reporter: Geoff Cummings
            Votes: 9
            Watchers: 18