
Severe performance issues due to new icon feature

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Component: warnings-ng-plugin
    • Labels: None
    • Environment:
      warnings-ng 12.4.1
      analysis-model-api 13.2.0
      jenkins 2.492.2-lts.jdk21

      After recent Jenkins and plugin updates we have encountered severe performance issues in Blue Ocean and the regular job listings. They show up as long load times (more than a minute) for branch lists and the Blue Ocean UI, and they occur on multibranch jobs with many branches (200+) that have issue scanning with the warnings-ng plugin enabled.

      I've analyzed CPU and allocation profiles of our Jenkins controller; they show getParserId as the main culprit (see the attached screenshots).

      This high CPU usage is also accompanied by high memory allocation and GC churn.

      (Note: the attached screenshots do not necessarily cover the same time range.)

      I've downgraded the warnings-ng plugin to 11.12.0 and analysis-model-api to 12.9.1, which has completely resolved both the CPU usage and the memory allocation issues.
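
      For reference, the installed versions of the two plugins can be confirmed from the script console with a snippet along these lines (a sketch, assuming script console access; the IDs are the published plugin short names):

        // Script console sketch: print the installed versions of the plugins involved,
        // to confirm which combination is currently running on the controller.
        ['warnings-ng', 'analysis-model-api'].each { id ->
            def plugin = Jenkins.instance.pluginManager.getPlugin(id)
            println "${id}: ${plugin?.version ?: 'not installed'}"
        }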

        Attachments:
        1. 2025-03-17 11_15_40-Window.png (90 kB, Christian)
        2. 2025-03-13 19_46_59-Window.png (74 kB, Christian)


          Mark Waite added a comment - edited

          Can you provide more details that describe how you have enabled issue scanning with the warnings plugin? My initial guesses trying to duplicate the problem were not successful.

          I've created over 200 branches on https://ci.jenkins.io/job/Plugins/job/xshell-plugin/ and am allowing it to run the jobs on those branches to completion. The load of starting and running 200 concurrent jobs that each require a Windows and a Linux agent for 2-4 minutes is certainly slowing response time, but it is a matter of seconds delay.

          Those jobs are using recordIssues(). The script they are running is https://github.com/jenkins-infra/pipeline-library/blob/master/vars/buildPlugin.groovy
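
          A stripped-down Jenkinsfile along these lines exercises the same code path on every branch; it is only a sketch and not the actual buildPlugin.groovy content:

            // Minimal declarative Jenkinsfile sketch: build and record issues on every
            // branch of a multibranch job. Tool selection and patterns are illustrative only.
            pipeline {
                agent any
                stages {
                    stage('Build') {
                        steps {
                            // Produce some compiler output and report files to feed the parsers.
                            sh './gradlew build --continue || true'
                        }
                    }
                }
                post {
                    always {
                        recordIssues tools: [java(), checkStyle(pattern: '**/build/reports/checkstyle/*.xml')]
                    }
                }
            }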

          Now that the 200 jobs have all completed, when I open the jobs page it takes about 15 seconds to finish loading the entire page. The page is already displaying in the first few seconds, but the "page loading" indicator in the browser tab shows that data is still arriving. The Blue Ocean page needs about 10 seconds to finish loading. A specific Pipeline run loads immediately.


          Christian added a comment -

          We use the following step to scan for issues from a declarative Jenkins pipeline:

          recordIssues enabledForFailure: true, quiet: true,
              tools: [
                  checkStyle(pattern: '**/build/reports/checkstyle/*.xml'),
                  junitParser(pattern: 'frontend/coverage/**/junit.xml,**/build/test-results/**/*.xml,**/build/reports/ruleTests/**/*.xml'),
                  java(),
                  kotlin()
              ]

          Many of the branches were already scanned before upgrading to the warnings-ng version with which we experienced the issues.
          FlameGraph.zip contains full CPU and allocation flame graphs in SVG form.

          Is there any additional information that I could provide that can help with reproduction / analysis?


          Fredrik added a comment -

          We're experiencing similar severe performance issues on some multibranchPipelineJobs. The job in question has a relatively high number in the "#issues" column (16,000+), so my thought was that there is something related to rendering that data.

          We upgraded from 11.12.0 to 12.4.1. 


          Stefan Spieker added a comment -

          We see the same performance issue when there is a high number of issues (60,000+).


          Stefan Spieker added a comment -

          I can confirm that the issue is fixed with 12.5.0! Thanks a lot, drulli.


            Assignee: Ulli Hafner (drulli)
            Reporter: Christian (c_fraenkel)
            Votes: 1
            Watchers: 4
