Type: Bug
Resolution: Fixed
Priority: Blocker
Labels: None
I saw the closed issue https://issues.jenkins-ci.org/browse/JENKINS-42136, but the problem still occurs for us (even with the newest plugin version).
Stacktrace:
java.util.concurrent.RejectedExecutionException: Task okhttp3.RealCall$AsyncCall@4b0f8dab rejected from java.util.concurrent.ThreadPoolExecutor@71ed53eb[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 7] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at okhttp3.Dispatcher.enqueue(Dispatcher.java:130) at okhttp3.RealCall.enqueue(RealCall.java:100) at okhttp3.internal.ws.RealWebSocket.connect(RealWebSocket.java:183) at okhttp3.OkHttpClient.newWebSocket(OkHttpClient.java:436) at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:267) at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:61) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:319) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:237) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:188) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:278) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:178) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor416.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) Caused: java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:329) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:237) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at 
org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:188) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:278) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:178) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor416.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) at WorkflowScript.run(WorkflowScript:100) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor304.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:103) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor304.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:60) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor304.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) at com.cloudbees.groovy.cps.Next.step(Next.java:83) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Finished: FAILURE
We noticed that this issue has existed since you added the Kubernetes clients cache. It seems that you close the Kubernetes client (and therefore also its dispatcher's executor service) while it is still available in the cache.
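For illustration, the mechanism described here can be reproduced outside Jenkins with plain OkHttp. The following is a minimal sketch (my own, not the plugin's code), assuming OkHttp 3.9.x on the classpath: once a client's dispatcher executor has been shut down (which is effectively what closing a cached client does), any further enqueue on that client is rejected with the same RejectedExecutionException seen above.
@Grab('com.squareup.okhttp3:okhttp:3.9.1')
import okhttp3.*

// Simulate a "closed" cached client: the dispatcher's executor service is
// terminated, but the client object itself is still reachable and reused.
def client = new OkHttpClient()
client.dispatcher().executorService().shutdownNow()

def request = new Request.Builder().url('https://example.org/').build()

// In OkHttp 3.9.x this throws java.util.concurrent.RejectedExecutionException
// straight out of Dispatcher.enqueue(); newer 3.12.x versions instead deliver
// an InterruptedIOException("executor rejected") to onFailure.
client.newCall(request).enqueue([
    onFailure : { call, e -> println "failure: $e" },
    onResponse: { call, response -> response.close() }
] as Callback)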
is duplicated by: JENKINS-56673 Better handling of ChannelClosedException in Declarative pipeline (Resolved)
is related to: JENKINS-42136 shared-library abstraction causing RejectedExecutionException when running sh() commands (Resolved)
links to
[JENKINS-55392] java.util.concurrent.RejectedExecutionException AGAIN
I attached a segment with all the calls related to a failed build that showed this error.
jequals5 I'm happy to help track this down, if you give me some pointers on how and where to look. Let me know if I can be helpful on this issue. I'm just not really sure where to start.
I'm syncing with the plugin maintainer to make sure we agree on what I think is the root cause and on the best way forward.
Given this plugin has wide adoption, it is always best to ensure there is a unified approach.
Please provide the Kubernetes version, cloud provider, and the master logs at the finest level: https://github.com/jenkinsci/kubernetes-plugin/#debugging
I would look for logs that show whether the client is getting closed and relate those to
https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/KubernetesClientProvider.java#L49
Especially check for "forcing close" entries.
I have a test that reproduces the race condition at https://github.com/jenkinsci/kubernetes-plugin/pull/418
csanchez Since you have reproduced this, do you still need the log data you mentioned in a previous post?
I just read the notes in the PR you linked above. In my case, we sometimes run out of connections even when there is only one build pod. This seems to suggest that there are lots of pending "exec" calls against pods that no longer exist. Can they all just be deleted when the pod is cleaned up at the end of a build?
Yes, I need the logs because we need to figure out when and why the connections are being closed.
The only thing I can reproduce is that execs on a closed connection cause the same exception.
Jenkins ver. 2.150.1 with kubernetes plugin 1.14.3
/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
We self-host on openstack in our datacenter. Kubernetes is deployed on CentOS7 using kubespray.
When you say "master logs" do you mean the apiserver logs?
Nevermind that last question. I get it now that you're asking about the Jenkins master. I'll gather some logs today and attach them to this issue.
I added the logger, but the Jenkins webui only keeps the last few hundred lines, so sometimes I can't copy and paste all the logs before they are gone. Is there some better way to capture these logs?
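In case it is useful to anyone else hitting the same limitation: instead of relying on the web UI's log recorder buffer, the plugin logger can be pointed at a file from the Script Console. A rough sketch (my own workaround, not official plugin guidance; the log path is an assumption and must be writable by the Jenkins master):
import java.util.logging.*

// Capture everything from the kubernetes plugin at FINEST and append it to
// rotating files so entries are not lost when the in-memory buffer rolls over.
new File('/var/jenkins_home/logs').mkdirs()
def logger = Logger.getLogger('org.csanchez.jenkins.plugins.kubernetes')
logger.setLevel(Level.ALL)

def handler = new FileHandler('/var/jenkins_home/logs/kubernetes-plugin-%g.log',
                              10 * 1024 * 1024, 5, true)  // 10 MB per file, keep 5, append
handler.setLevel(Level.ALL)
handler.setFormatter(new SimpleFormatter())
logger.addHandler(handler)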
From what I see so far, I don't see any logs about "force close", or "force" anything. I also don't see any logs that say "Removing entry", which it seems should be in the log you linked to. I'm not sure if I'm doing this right, so I may need more instructions.
I do see this repeating message, but I'm not sure it's related.
Started Purge expired KubernetesClients
Jan 17, 2019 4:42:17 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider gracefulClose
Not closing io.fabric8.kubernetes.client.DefaultKubernetesClient@6f6f5e02: there are still running (10) or queued (0) calls
Jan 17, 2019 4:42:17 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$PurgeExpiredKubernetesClients
Finished Purge expired KubernetesClients. 3 ms
Jan 17, 2019 4:43:17 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$PurgeExpiredKubernetesClients
Started Purge expired KubernetesClients
Jan 17, 2019 4:43:17 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider gracefulClose
Not closing io.fabric8.kubernetes.client.DefaultKubernetesClient@6f6f5e02: there are still running (10) or queued (0) calls
Jan 17, 2019 4:43:17 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$PurgeExpiredKubernetesClients
Finished Purge expired KubernetesClients. 1 ms
Jan 17, 2019 4:44:17 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$PurgeExpiredKubernetesClients
I'm still not completely sure how to get the exact logs you want, but I ran into this issue again yesterday and grabbed all the logs that were available at the time. They are attached.
If you need something else, please provide more instructions.
What was the agent name of the build that failed?
app-ui-build-common-b0b6518b-d72a-4826-a165-719aca99e295--9cgzg
or app-ui-build-common-6bc9afa4-13cf-43a6-af59-338a80dda163--9h6d6
I just checked both of those builds and they both failed for this same reason.
Same here, please let us know if we can provide further input to help resolve this bug.
Hi, everyone.
Occurring for me too....
Maybe any workarounds? (downgrade version of plugin...)
Thanks in advance for any tips
This problem hurts very much. I'd be happy to help by providing logs.
Encountered it with plugin versions 1.14.3, 1.14.5 on Jenkins 2.165.
We have pod templates defined in the config.xml file, and ones defined in the pipeline code.
The ones in config.xml contain only the 'jnlp' container, while the ones defined in pipeline contain two containers.
This problem seems only to affect the ones defined in the pipelines (either because of where they are defined, or because of the different number of containers).
What I would need is any meaningful logs from KubernetesClientProvider; there are none in the attachments, so I have no way to see whether the client is being closed there (as I think it is) or whether the issue is somewhere else.
We're experiencing this quite often, I'll try to gather a sanitized log for you to review today. And to answer your latest question, we do execute long running scripts in our pipeline. Like kicking off a suite of integration tests that can last 10+ minutes (not sure if that's considered long running).
Are you using the containerLog step? It seems that step is closing the connections when it shouldn't.
We see this error very frequently. The exception stack trace appears in the job console output. However, when I look at the Kubernetes plugin logs (our log level is ALL), I don't find any related error or exception. How do I find correlated errors/messages in the Kubernetes logs?
Here's an example with containerLog that reproduces the issue on my setup. First run gives me the expected failure "No such file or directory". Run it a second time though and you get the dreaded "RejectedExecutionException". Kubernetes jobs are then broken until you reset by changing "Max connections to Kubernetes API" or restarting Jenkins.
def label = "mypod-${UUID.randomUUID().toString()}" podTemplate(label: label, containers: [ containerTemplate(name: 'maven', image: 'maven', command: 'cat', ttyEnabled: true) ]) { node(label) { container('maven') { sh "echo THIS IS A TEST" } containerLog(name: 'maven', returnLog: true) sh "cd force-error" } }
I'm aware of the issue with containerLog. Is there anybody else having the issue who is not using containerLog?
We are having the same problem and we do not use containerLog in our jenkins instance
My suspicion is that it has something to do with failing shell steps not exiting correctly.
We also do not use containerLog and are still impacted by this issue.
For those impacted by this problem; I downgraded to a previous version of the plugin. Luckily that fixed the problem for me.
Hi Ahmed, what version are you using right now? I have the same issue as others using version 1.14.5. What is strange is that we have two instances of Jenkins with the kubernetes plugin, but the issue occurs only on one of them... which is weird.
csanchez I think the fix in 4.1.8 only addresses the issue caused by the ContainerLog step. Will the same fix help if the ContainerLog step is not being used? In our case, we are impacted by this bug and our Jenkinsfile does not use ContainerLog; I'm trying to find out whether the fix will help in our scenario.
Guys, after I upgraded Jenkins to a newer version (2.165 -> 2.167) and the kubernetes plugin (1.14.5 -> 1.14.8), it seems to be stable again... I no longer see any errors like
"20:22:05 java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API", which is promising.
Jenkins-2.150.3, kubernetes-2.14.8
We see a slightly different exception:
[Pipeline] sh 10:50:54 java.io.InterruptedIOException: executor rejected 10:50:54 at okhttp3.RealCall$AsyncCall.executeOn(RealCall.java:185) 10:50:54 at okhttp3.Dispatcher.promoteAndExecute(Dispatcher.java:186) 10:50:54 at okhttp3.Dispatcher.enqueue(Dispatcher.java:137) 10:50:54 at okhttp3.RealCall.enqueue(RealCall.java:126) 10:50:54 at okhttp3.internal.ws.RealWebSocket.connect(RealWebSocket.java:193) 10:50:54 at okhttp3.OkHttpClient.newWebSocket(OkHttpClient.java:435) 10:50:54 at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:274) 10:50:54 at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:58) 10:50:54 at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:333) 10:50:54 at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:246) 10:50:54 at hudson.Launcher$ProcStarter.start(Launcher.java:455) 10:50:54 at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:206) 10:50:54 at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) 10:50:54 at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305) 10:50:54 at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268) 10:50:54 at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) 10:50:54 at sun.reflect.GeneratedMethodAccessor550.invoke(Unknown Source) 10:50:54 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 10:50:54 at java.lang.reflect.Method.invoke(Method.java:498) 10:50:54 at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) 10:50:54 at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) 10:50:54 at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) 10:50:54 at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) 10:50:54 at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) 10:50:54 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) 10:50:54 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) 10:50:54 at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157) 10:50:54 at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) 10:50:54 at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) 10:50:54 at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:142) 10:50:54 at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155) 10:50:54 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159) 10:50:54 at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) 10:50:54 at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) 10:50:54 at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source) 10:50:54 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 10:50:54 at java.lang.reflect.Method.invoke(Method.java:498) 10:50:54 at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:103) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) 10:50:54 at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source) 10:50:54 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 10:50:54 at java.lang.reflect.Method.invoke(Method.java:498) 10:50:54 at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) 10:50:54 at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:60) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) 10:50:54 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) 10:50:54 at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source) 10:50:54 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 10:50:54 at java.lang.reflect.Method.invoke(Method.java:498) 10:50:54 at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) 10:50:54 at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) 10:50:54 at com.cloudbees.groovy.cps.Next.step(Next.java:83) 10:50:54 at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) 10:50:54 at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) 10:50:54 at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) 10:50:54 at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) 10:50:54 at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) 10:50:54 at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34) 10:50:54 at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59) 10:50:54 at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:121) 10:50:54 at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232) 10:50:54 at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64) 10:50:54 at java.util.concurrent.FutureTask.run(FutureTask.java:266) 10:50:54 at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131) 10:50:54 at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) 10:50:54 at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) 10:50:54 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 10:50:54 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) 10:50:54 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 10:50:54 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 10:50:54 at java.lang.Thread.run(Thread.java:748) 10:50:54 Caused by: java.util.concurrent.RejectedExecutionException: Task okhttp3.RealCall$AsyncCall@2ad3f18f rejected from java.util.concurrent.ThreadPoolExecutor@c9108da[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 13825] 10:50:54 at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) 10:50:54 at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) 10:50:54 at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) 10:50:54 at okhttp3.RealCall$AsyncCall.executeOn(RealCall.java:182) 10:50:54 ... 79 more [Pipeline] } [Pipeline] // stage [Pipeline] echo 10:50:54 No response
We still see similar issues:
kubernetes plugin 2.14.9, jenkins 2.164.1
(with the Max connections to Kubernetes API set to an insanely high number, and the read and connect timeouts set to 600)
java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:348) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:246) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:206) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor419.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:156) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:160) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) Also: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:348) at 
org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:246) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:206) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor419.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:156) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:160) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) Caused: java.io.IOException: Interrupted while waiting for websocket connection, you should increase the Max connections to Kubernetes API at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:351) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:246) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:206) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor419.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:156) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:160) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) at WorkflowScript.run(WorkflowScript:287) Caused: java.io.IOException: Interrupted while waiting for websocket connection, you should increase the Max connections to Kubernetes API at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:351) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:246) at hudson.Launcher$ProcStarter.start(Launcher.java:455) at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:206) at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) at sun.reflect.GeneratedMethodAccessor419.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at 
org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:156) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:160) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:130) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) at WorkflowScript.run(WorkflowScript:277) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor407.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:103) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor407.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:60) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor407.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) at com.cloudbees.groovy.cps.Next.step(Next.java:83) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129) at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34) at 
org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:136) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)
or shall I open a separate ticket to track this?
In order to get past this issue, we tried to upgrade the plugin to the latest version, 1.14.9.
Caused: java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:329) at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:237)
Yesterday, when we started testing in our staging instance, we ran into a different issue. We use this plugin to connect to our Kubernetes platform running on OpenShift, and we make heavy use of Jenkins pipelines. Strangely, for every pipeline job that gets kicked off, two pods are spun up instead of just one. The build uses the first pod and terminates, but the second pod stays active and is only terminated once its timeout elapses.
We are puzzled by this change in behavior. It seems like we are solving one problem while introducing another, and we are skeptical about using the latest version of the plugin in our production Jenkins instance.
Any idea about this latest change ?
Thanks
Ravi
Hello Ravi,
I also noticed the same behavior on one of our build jobs today. This bug with duplicate pods already existed in an older version of the plugin and was already fixed, I think.
Updating this plugin is always a roller coaster ride.
Not sure if anyone else is facing this problem: whenever we run shell commands from inside a running container, the pipeline terminates with
Caused: java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API
What we have tried so far:
- Raised the Max connections to Kubernetes API from the default 32 to a higher number (did not fix the problem)
- Restarted the Jenkins instance; that did not fix it either
- Focused on the stage where the pipeline failed; it was executing shell commands. Once we commented out the shell commands and replayed the pipeline, the build was successful.
- After we commented out the shell command instructions, this time it went through.
Not sure if this was a fluke or some other plugin behavior we are not aware of. We are not confident about upgrading the plugin from the 1.13.8 version that we currently run in our production instance.
Is anyone still experiencing this issue? What version do we need to upgrade to in order to avoid it?
Thank you all for any feedback.
Ravi.
Any update on the problem?
We still see similar issues:
kubernetes plugin 1.14.9, jenkins 2.170
[Pipeline] { (Delete old jobs) [Pipeline] sh java.io.InterruptedIOException: executor rejected at okhttp3.RealCall$AsyncCall.executeOn(RealCall.java:185) at okhttp3.Dispatcher.promoteAndExecute(Dispatcher.java:186)... Caused by: java.util.concurrent.RejectedExecutionException: Task okhttp3.RealCall$AsyncCall@192d2ae5 rejected from java.util.concurrent.ThreadPoolExecutor@698d8fe9[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 228] ... hudson.remoting.ProxyException: io.fabric8.kubernetes.client.KubernetesClientException: No response at io.fabric8.kubernetes.client.dsl.internal.ExecWebSocketListener.onFailure(ExecWebSocketListener.java:230) at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571) at okhttp3.internal.ws.RealWebSocket$2.onFailure(RealWebSocket.java:221) at okhttp3.RealCall$AsyncCall.executeOn(RealCall.java:188) at okhttp3.Dispatcher.promoteAndExecute(Dispatcher.java:186)
kubernetes plugin 1.14.9
Please do not use JIRA to discuss issues present in obsolete software versions.
We are running into the same issue with version 1.17.2 with no `containerLog` steps used; every time it happens it's on an `sh` step. Increasing max connections, even to absurd numbers (100k or more), does not help at all. If I can provide any logs or information that could be useful in finally solving this, please let me know.
karolgil better to file a fresh bug report with as much detail as you can and Link it to this one.
I think I solved it in my case; maybe it'll help others.
TL;DR: if your API tokens have a limited lifetime, decrease the client cache expiry time.
As documented in the README of this plugin, EKS clusters (our case) use aws-iam-authenticator, and the tokens granted to Kubernetes clients have a limited lifetime (15 minutes). The default expiry time for clients in the plugin is 60 minutes, which means the plugin can create and cache a client that will become unauthorized 15 minutes later. I've decreased the cache expiry to 15 seconds and the expired-clients purge time to 120 seconds (to avoid keeping all those expired clients in memory) via system properties:
-Dorg.csanchez.jenkins.plugins.kubernetes.clients.expiredClientsPurgeTime=120
-Dorg.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration=30
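A quick sanity check I'd suggest (a sketch for the Script Console; the plugin appears to read these values once at startup, so the JVM still needs to be restarted with the -D flags for them to take effect): confirm the properties actually reached the Jenkins master JVM.
['org.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration',
 'org.csanchez.jenkins.plugins.kubernetes.clients.expiredClientsPurgeTime'].each { name ->
    // Prints null if the -D flag was set on the wrong process (e.g. an agent JVM).
    println "${name} = ${System.getProperty(name)}"
}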
karolgil that topic is discussed in JENKINS-58143. I am not aware of any mechanism by which it could cause the RejectedExecutionException, but perhaps that is one failure mode.
I'm not sure if this is the case; the fact is that we saw the RejectedExecutionException and the log message about increasing max connections, but of course it might be a weird failure mode.
I posted it here as maybe it'll be helpful to someone in similar environment. For now I won't open another issue as it seems to be resolved.
dwatroustrinet Can you provide the output of the logs for the kubeapi pod? (kubectl -n kube-system log <pod name of the 1st kubeapi>)