Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment: Jenkins 2.414.1, kubernetes plugin 4029.v5712230ccb_f8
After the latest upgrade of Jenkins itself (from the previous LTS to the latest LTS) and of the Kubernetes plugin, I noticed that it now creates 2 pods per job run.
This started happening only recently, but I cannot tell which component upgrade caused it.
I create the pod with a single `podTemplate` call.
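For context, here is a minimal sketch of that kind of `podTemplate` usage. This is not our actual Jenkinsfile: the container name, image and build step are placeholders, and the real pipeline supplies the full (redacted) pod YAML visible in the build log below.

// Minimal sketch only; container name, image and build step are placeholders.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: alpine:3.18   # placeholder image
    command: ['sleep']
    args: ['infinity']
''') {
    node(POD_LABEL) {
        container('build') {
            sh 'echo build steps run here'
        }
    }
}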
This is how the Jenkins build log looks in the UI:
00:00:07.660 Created Pod: kubernetes adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76
00:00:21.526 Still waiting to schedule task
00:00:21.528 Waiting for next available executor on ‘jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76’
00:00:38.214 Agent jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76 is provisioned from template jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1
00:00:38.237 ---
00:00:38.237 apiVersion: "v1"
00:00:38.237 kind: "Pod"
00:00:38.237 metadata:
00:00:38.237 <redacted>
00:00:38.238
00:00:38.347 Created Pod: kubernetes adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw
00:00:38.679 Running on jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76 in /home/jenkins/agent/workspace/apps--certly-client_v2
Note that both pod names share the same prefix and only the last 5 characters (the generated suffix) differ.
And this is how the stdout of the Jenkins server looks:
2023-09-05 03:43:52.304+0000 [id=416] INFO hudson.slaves.NodeProvisioner#update: jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76 provisioning successfully completed. We have now 2 computer(s)
2023-09-05 03:43:52.383+0000 [id=418] INFO o.c.j.p.k.KubernetesLauncher#launch: Created Pod: kubernetes adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76
2023-09-05 03:43:57.188+0000 [id=974] INFO h.TcpSlaveAgentListener$ConnectionHandler#run: Accepted JNLP4-connect connection #7 from /10.51.1.43:58702
2023-09-05 03:44:22.869+0000 [id=418] INFO o.c.j.p.k.KubernetesLauncher#launch: Pod is running: kubernetes adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76
2023-09-05 03:44:23.007+0000 [id=47] INFO hudson.slaves.NodeProvisioner#update: jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw provisioning successfully completed. We have now 3 computer(s)
2023-09-05 03:44:23.070+0000 [id=418] INFO o.c.j.p.k.KubernetesLauncher#launch: Created Pod: kubernetes adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw
2023-09-05 03:44:27.955+0000 [id=1029] INFO h.TcpSlaveAgentListener$ConnectionHandler#run: Accepted JNLP4-connect connection #8 from /10.51.1.46:42336
...
2023-09-05 03:47:02.719+0000 [id=976] INFO j.s.DefaultJnlpSlaveReceiver#channelClosed: Computer.threadPoolForRemoting [#23] for jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-7ws76 terminated: java.nio.channels.ClosedChannelException
2023-09-05 03:50:42.663+0000 [id=1549] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw
2023-09-05 03:50:42.886+0000 [id=1549] INFO o.c.j.p.k.KubernetesSlave#deleteSlavePod: Terminated Kubernetes instance for agent adm-prod-jenkins-agents/jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw
2023-09-05 03:50:42.888+0000 [id=1549] INFO o.c.j.p.k.KubernetesSlave#_terminate: Disconnected computer jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw
2023-09-05 03:50:42.889+0000 [id=1549] INFO j.s.DefaultJnlpSlaveReceiver#channelClosed: Computer.threadPoolForRemoting [#33] for jenkins-slave-d2e6bd35-409a-48e5-8b13-bf255719d5c2-zxfx1-dr2nw terminated: java.nio.channels.ClosedChannelException
So one of the pods exits immediately after the job has completed, while the other stays around until the job timeout is reached.