• Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • okhttp-api-plugin
    • None
    • Jenkins 2.363

      We are seeing over 900 "OkHttp ConnectionPool" threads in state TIMED_WAITING at any given time. This is a large corporate installation, so we have many builds running concurrently, but only a handful of repositories hosted on GitHub, maybe 5 builds an hour, so there is no reason this plugin should have that many threads running. The server also hits "unable to create new thread" errors about once a week, which I believe is caused by OkHttp exhausting the thread limit.
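To quantify the leak from inside the JVM, live threads can be counted by name prefix (the name "OkHttp ConnectionPool" is taken from the dumps in this report; in a fresh JVM with no OkHttp activity the count is 0). A minimal stdlib sketch:

```java
// Count live threads whose name starts with a given prefix.
// "OkHttp ConnectionPool" is the thread name reported in this issue;
// a fresh JVM with no OkHttp clients prints 0.
public class OkHttpThreadCount {
    static long countThreads(String prefix) {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith(prefix))
                .count();
    }

    public static void main(String[] args) {
        System.out.println(countThreads("OkHttp ConnectionPool"));
    }
}
```

Run periodically (or from the Jenkins script console), this gives the growth curve of the pool threads without a full thread dump.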

          [JENKINS-69567] Okhttp Connection Pool

          Mark Waite added a comment -

          Please provide more details so that others can duplicate the problem. Refer to "How to report an issue" for the types of information that are needed.

          The okhttp api plugin is asked by other plugins to create threads. Those other plugins and operations they are performing are the likely cause of the issue.
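The pattern Mark describes can be sketched with a plain java.util.concurrent analogue (no OkHttp dependency; the leak mechanism is assumed to be similar): every pool that is created and never shut down keeps its idle worker threads alive, just as an OkHttpClient constructed per request keeps its own connection-pool threads instead of reusing a shared client.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolLeakDemo {
    public static void main(String[] args) throws Exception {
        // Anti-pattern: build a fresh pool per request and abandon it,
        // analogous to constructing a new OkHttpClient for every call.
        for (int i = 0; i < 5; i++) {
            ExecutorService perCall = Executors.newFixedThreadPool(2);
            perCall.submit(() -> { }).get();
            // no shutdown() -> the worker thread lingers, idle
        }
        // The default thread factory names workers "pool-N-thread-M".
        long leaked = Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("pool-"))
                .count();
        System.out.println(leaked); // prints 5: one idle thread per abandoned pool
    }
}
```

The fix on the calling side is a single long-lived client or pool reused across requests and shut down once, which is why the plugins driving okhttp-api are the likely place to look.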


          Dennis Burns added a comment -

          How to duplicate the issue: I don't know what is causing it, so I don't have steps to reproduce it. Our Jenkins runs on Tomcat 8.5.82 on Red Hat 7.9 with Java 11. The agents all run Java 11: 4 are Red Hat 7 and 4 are Red Hat 8, used primarily to build code -> unit test -> Veracode -> SonarQube -> push to UrbanCode Deploy, most of which build Docker images. We also have 9 Windows servers used primarily to run Selenium tests. We do not run any builds on the controller. The last crash was yesterday morning; the OkHttp connection pool grew throughout the day and peaked around 1,300 running threads. Overnight it dropped to 300 threads. As the developers started the day it jumped to 2,000 threads, and by the end of the work day today it was running around 2,700 threads. As for related error messages, I am not seeing much in the logs, but I did find some messages that might be related.

          310 instances of this warning:
          08-Sep-2022 09:15:36.444 WARNING [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [jenkins] appears to have started a thread named [OkHttp TaskRunner] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
           java.base@11.0.6/java.lang.Object.wait(Native Method)
           java.base@11.0.6/java.lang.Object.wait(Object.java:462)
           okhttp3.internal.concurrent.TaskRunner$RealBackend.coordinatorWait(TaskRunner.kt:294)
           okhttp3.internal.concurrent.TaskRunner.awaitTaskToRun(TaskRunner.kt:218)
           okhttp3.internal.concurrent.TaskRunner$runnable$1.run(TaskRunner.kt:59)
           java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
           java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
           java.base@11.0.6/java.lang.Thread.run(Thread.java:834)

          and then many of these:

          08-Sep-2022 09:15:36.469 SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [jenkins] created a ThreadLocal with key of type [java.lang.ThreadLocal.SuppliedThreadLocal] (value [java.lang.ThreadLocal$SuppliedThreadLocal@5c162085]) and a value of type [com.sun.jersey.api.client.Client] (value [com.sun.jersey.api.client.Client@5546c316]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
          08-Sep-2022 09:15:36.471 SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [jenkins] created a ThreadLocal with key of type [java.lang.ThreadLocal.SuppliedThreadLocal] (value [java.lang.ThreadLocal$SuppliedThreadLocal@5c162085]) and a value of type [com.sun.jersey.api.client.Client] (value [com.sun.jersey.api.client.Client@7f4ad59d]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

          and then the crash.

          I am not sure whether the OkHttp connection pool is related, but given the large volume of idle threads it doesn't seem right.
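To confirm which component owns the idle threads, the live thread set can be grouped by name with trailing counters stripped; a stdlib sketch, runnable in any JVM and adaptable to the Jenkins script console:

```java
import java.util.Map;
import java.util.TreeMap;

public class ThreadsByName {
    public static void main(String[] args) {
        // Group live threads by name, stripping a trailing "-1"/"#12"/" 3"
        // counter so repeated pool threads collapse into one bucket.
        Map<String, Integer> counts = new TreeMap<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String key = t.getName().replaceAll("[-# ]?\\d+$", "").trim();
            counts.merge(key, 1, Integer::sum);
        }
        counts.forEach((name, n) -> System.out.println(n + "\t" + name));
    }
}
```

On the affected controller, the bucket whose count tracks the 1,300 to 2,700 growth reported above would point at the owning pool.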



            Assignee: Liam Newman (bitwiseman)
            Reporter: Dennis Burns (dburns1976)
            Votes: 1
            Watchers: 3
