Jenkins / JENKINS-66506

Ever increasing number of threads in the metrics plugin

      During performance testing, we see that the number of threads keeps increasing when it should not: the number of concurrent users is stable (= 1) and the job queue also holds a fixed number of items.

      Looking at a thread dump, hundreds of the threads look like this:

      "QueueSubTaskMetrics [#6]" Id=350 Group=main WAITING on hudson.model.queue.FutureImpl@6f7dbdac"QueueSubTaskMetrics [#6]" Id=350 Group=main WAITING on hudson.model.queue.FutureImpl@6f7dbdac at java.lang.Object.wait(Native Method) -  waiting on hudson.model.queue.FutureImpl@6f7dbdac at java.lang.Object.wait(Object.java:502) at hudson.remoting.AsyncFutureImpl.get(AsyncFutureImpl.java:79) at jenkins.metrics.impl.JenkinsMetricProviderImpl.lambda$asSupplier$3(JenkinsMetricProviderImpl.java:1142) at jenkins.metrics.impl.JenkinsMetricProviderImpl$$Lambda$670/282561367.get(Unknown Source) at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Number of locked synchronizers = 1 - java.util.concurrent.ThreadPoolExecutor$Worker@22c49544 

      An unbounded thread pool is suspected here: https://github.com/jenkinsci/metrics-plugin/blob/21e83be64f85d343c3c9b0e0b0956021d74ade95/src/main/java/jenkins/metrics/impl/JenkinsMetricProviderImpl.java#L836-L839
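      (Illustration added for clarity; this is not the plugin's actual code. The dump shows each QueueSubTaskMetrics worker parked in Future.get() inside a CompletableFuture supplier. With an executor that has no upper bound on threads, every queued item pins one worker until its future completes, so each new submission creates a fresh thread instead of reusing a parked one. A minimal standalone sketch of that failure mode, using a cached pool and a hypothetical buildDone future as stand-ins:)
      {code:java}
      import java.util.concurrent.CompletableFuture;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      public class UnboundedPoolDemo {
          public static void main(String[] args) throws Exception {
              // Hypothetical stand-in for the plugin's executor: a cached pool has no
              // upper bound on how many worker threads it will create.
              ExecutorService pool = Executors.newCachedThreadPool();
              // Stand-in for hudson.model.queue.FutureImpl: a future that completes late (or never).
              CompletableFuture<Void> buildDone = new CompletableFuture<>();

              for (int i = 0; i < 100; i++) {
                  // Mirrors the asSupplier() pattern from the stack trace: the supplier
                  // blocks on get(), parking one pool thread per queued item.
                  CompletableFuture.supplyAsync(() -> {
                      try {
                          return buildDone.get();
                      } catch (Exception e) {
                          throw new RuntimeException(e);
                      }
                  }, pool);
              }

              Thread.sleep(1000);
              // Every submission created a fresh thread because all existing workers are parked.
              System.out.println("Live threads: " + Thread.activeCount());
              pool.shutdownNow();
          }
      }
      {code}
      Capping the pool size, or not blocking inside the supplier, would keep the thread count flat.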

      See the attachments for the jstack and top -H output.

       

        1. top-dash-h.txt
          277 kB
        2. jstack2.txt
          3.41 MB

          [JENKINS-66506] Ever increasing number of threads in the metrics plugin

          megathaum created issue -
          megathaum made changes -
          Description Original: (the initial description, as quoted above)
          New: (the same text, with the note "{{jstack and top -H output see attachments.}}" appended)
          Jesse Glick made changes -
          Link New: This issue is duplicated by JENKINS-66941 [ JENKINS-66941 ]
          Jesse Glick made changes -
          Link New: This issue depends on JENKINS-66947 [ JENKINS-66947 ]
          Jesse Glick made changes -
          Assignee New: Jesse Glick [ jglick ]
          Jesse Glick made changes -
          Status Original: Open [ 1 ] New: In Progress [ 3 ]
          Jesse Glick made changes -
          Status Original: In Progress [ 3 ] New: In Review [ 10005 ]
          Jesse Glick made changes -
          Remote Link New: This issue links to "metrics-plugin PR-127 (Web Link)" [ 27108 ]
          Jesse Glick made changes -
          Released As New: https://github.com/jenkinsci/metrics-plugin/releases/tag/metrics-4.1.6.1
          Resolution New: Fixed [ 1 ]
          Status Original: In Review [ 10005 ] New: Resolved [ 5 ]
          Jesse Glick made changes -
          Link New: This issue causes JENKINS-69817 [ JENKINS-69817 ]

            Assignee: Jesse Glick (jglick)
            Reporter: megathaum
            Votes: 1
            Watchers: 5
