- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Environment: Jenkins 2.345, Kubernetes 1.17.4, Kubernetes plugin 3724.v0920c1e0ec69
In parallel stages, each stage is configured with a Kubernetes agent that mounts the same persistent volume claim as its workspace. However, during the build, some of the dynamic agents are unable to connect to Jenkins.
Jenkinsfile
pipeline {
  agent {
    kubernetes {
      showRawYaml true
      inheritFrom 'jenkins-slave'
      workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
      yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 2000m
        memory: 2000Mi
      limits:
        cpu: 2000m
        memory: 2000Mi
'''
    }
  }
  options {
    timeout(time: 240, unit: 'MINUTES')
  }
  environment {
    ******
  }
  stages {
    stage("source") {
      stages {
        stage("scm_plugin-9ef481df-75fa-4b87-98d8-84b2cd2dc6bd") {
          steps {
            container("jnlp") {
              checkout(
                scm: [
                  $class: 'SubversionSCM',
                  locations: [[remote: '****', credentialsId: '***', local: '.']],
                  workspaceUpdater: [$class: 'UpdateByCmdUpdater'],
                  additionalCredentials: [],
                  excludedCommitMessages: '',
                  excludedRegions: """""",
                  excludedRevprop: '',
                  excludedUsers: '',
                  excludedCommitMessages: '',
                  includedRegions: """""",
                  multiCoThreadNum: 1,
                  multiCoDepth: 2
                ],
                poll: false,
                changelog: false
              )
            }
          }
        }
      }
    }
    stage("parallel-1") {
      parallel {
        stage("stage-2") {
          stages {
            stage("sh-2b153d05-8a8a-4e80-ae4c-97bad084ca3e") {
              agent {
                kubernetes {
                  showRawYaml true
                  inheritFrom 'jenkins-slave'
                  workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
                  yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
  - name: c-sh-2b153d05-8a8a-4e80-ae4c-97bad084ca3
    image: library/debian
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
'''
                }
              }
              steps {
                container("c-sh-2b153d05-8a8a-4e80-ae4c-97bad084ca3") {
                  sh script: """
                    echo "ok"
                    pwd
                    sleep 60
                  """, label: "sh-2b153d05-8a8a-4e80-ae4c-97bad084ca3e"
                }
              }
            }
          }
        }
        stage("stage-3") {
          stages {
            stage("sh-db5fcaba-f15a-42bb-90d0-5aed3010983a") {
              agent {
                kubernetes {
                  showRawYaml true
                  inheritFrom 'jenkins-slave'
                  workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
                  yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
  - name: c-sh-db5fcaba-f15a-42bb-90d0-5aed3010983
    image: library/debian
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
'''
                }
              }
              steps {
                container("c-sh-db5fcaba-f15a-42bb-90d0-5aed3010983") {
                  sh script: """
                    echo "ok"
                    ls
                    sleep 60
                    echo "finished"
                  """, label: "sh-db5fcaba-f15a-42bb-90d0-5aed3010983a"
                }
              }
            }
          }
        }
        stage("stage-5") {
          stages {
            stage("sh-fe45b88a-dde4-4574-89cc-b9e60f15d1d5") {
              agent {
                kubernetes {
                  showRawYaml true
                  inheritFrom 'jenkins-slave'
                  workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
                  yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
  - name: c-sh-fe45b88a-dde4-4574-89cc-b9e60f15d1d
    image: library/debian
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
'''
                }
              }
              steps {
                container("c-sh-fe45b88a-dde4-4574-89cc-b9e60f15d1d") {
                  sh script: """
                    echo "ok"
                    ls
                    sleep 60
                    echo "finished"
                  """, label: "sh-fe45b88a-dde4-4574-89cc-b9e60f15d1d5"
                }
              }
            }
          }
        }
        stage("stage-6") {
          stages {
            stage("sh-606a6c7c-351d-4e9b-8acf-c35bd5cd58d8") {
              agent {
                kubernetes {
                  showRawYaml true
                  inheritFrom 'jenkins-slave'
                  workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
                  yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
  - name: c-sh-606a6c7c-351d-4e9b-8acf-c35bd5cd58d
    image: library/debian
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
'''
                }
              }
              steps {
                container("c-sh-606a6c7c-351d-4e9b-8acf-c35bd5cd58d") {
                  sh script: """
                    echo "ok"
                    ls
                    sleep 60
                    echo "finished"
                  """, label: "sh-606a6c7c-351d-4e9b-8acf-c35bd5cd58d8"
                }
              }
            }
          }
        }
      }
    }
    stage("stage-4") {
      stages {
        stage("sh-e79e6617-e0b2-4505-bb0b-9638d84f155b") {
          agent {
            kubernetes {
              showRawYaml true
              inheritFrom 'jenkins-slave'
              workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'c-apimgr-1677810273777', readOnly: false)
              yaml '''
spec:
  containers:
  - name: jnlp
    image: pipeline/multi_svn_jenkins_agent
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
  - name: c-sh-e79e6617-e0b2-4505-bb0b-9638d84f155
    image: library/debian
    tty: true
    resources:
      requests:
        cpu: 1000m
        memory: 2000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
'''
            }
          }
          steps {
            container("c-sh-e79e6617-e0b2-4505-bb0b-9638d84f155") {
              sh script: """
                echo "ok4"
                sleep 30
                echo "finished4"
              """, label: "sh-e79e6617-e0b2-4505-bb0b-9638d84f155b"
            }
          }
        }
      }
    }
  }
}
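For context, every agent above mounts the same pre-existing claim `c-apimgr-1677810273777`. The claim definition itself is not part of this report; a minimal sketch of how its status and access modes could be checked with plain kubectl (the namespace is the one the agent pods are created in, as shown in the build log further below):

# Show whether the shared workspace claim is bound and which access modes it allows
kubectl get pvc c-apimgr-1677810273777 -n dep35-jenkins-agents-testing -o wide
# Full details, including the bound PersistentVolume and any events on the claim
kubectl describe pvc c-apimgr-1677810273777 -n dep35-jenkins-agents-testing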
- As described above, the pipeline and all sub-stages are configured with dynamic Kubernetes agents that mount the same PVC (persistent volume claim) as the workspace.
- When the pipeline reaches `parallel-1`, the Kubernetes agents for all of its sub-stages are launched, but some of them get stuck. The agent on these stuck stages remains suspended. The build logs for these stages are shown below:
[2023-03-06T08:07:28.018Z] Created Pod: kubernetes dep35-jenkins-agents-testing/apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-5hmmq
[2023-03-06T08:07:37.843Z] Still waiting to schedule task
[2023-03-06T08:07:37.843Z] ‘apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-5hmmq’ is offline
[2023-03-06T08:13:37.972Z] Created Pod: kubernetes dep35-jenkins-agents-testing/apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-mpq55
[2023-03-06T08:20:37.976Z] Created Pod: kubernetes dep35-jenkins-agents-testing/apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-vg3t7
- However, checking the pod events shows that the containers have already started.
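A sketch of that check with plain kubectl, using the namespace and one of the pod names from the build log above:

# Pod events and container statuses for one of the stuck agent pods
kubectl describe pod apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-vg3t7 -n dep35-jenkins-agents-testing
# Or list the events for that pod directly
kubectl get events -n dep35-jenkins-agents-testing --field-selector involvedObject.name=apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-vg3t7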
- And the agent log shows that the connection to the master was successful:
Mar 06, 2023 8:27:51 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-vg3t7
Mar 06, 2023 8:27:51 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Mar 06, 2023 8:27:51 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.10
Mar 06, 2023 8:27:51 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Mar 06, 2023 8:27:51 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Mar 06, 2023 8:27:51 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among http://****/
Mar 06, 2023 8:27:52 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
Mar 06, 2023 8:27:52 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting TCP connection tunneling is enabled. Skipping the TCP Agent Listener Port availability check
Mar 06, 2023 8:27:52 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
  Agent address: ****
  Agent port:    50000
  Identity:      1d:79:8c:d1:40:08:9f:43:05:cd:fd:f2:8e:81:53:ff
Mar 06, 2023 8:27:52 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Mar 06, 2023 8:27:52 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to *****:50000
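For reference, the agent log above can be pulled from the jnlp container of the agent pod with plain kubectl (same namespace and pod name as above):

# Remoting output of the agent (jnlp) container shown above
kubectl logs apimgr-apimgr-1677810273777-14-g7vhp-5d1ft-vg3t7 -c jnlp -n dep35-jenkins-agents-testing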
I would like to know why this happens and how to solve it.