Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Won't Fix
Labels: None
Environment: Jenkins ver. 2.107.3 with kubernetes-plugin
Description
A simplified version of my declarative pipeline looks like this:
pipeline {
  agent {
    kubernetes {
      cloud 'kube-cloud'
      label 'kube'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: kube
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave
"""
    }
  }
  stages {
    stage('step-in-kubernetes') {
      steps {
        sh 'echo inKubernetes'
      }
    }
    stage('step-in-agent') {
      agent { label 'agentX' }
      steps {
        sh 'echo inAgent'
      }
    }
  }
}
In this case the pipeline fails in stage 'step-in-agent' as follows:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘kube’ are offline
Agent kube-3cqmb-dpw7t is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (kube):
Running on kube-3cqmb-dpw7t in /home/jenkins/workspace/pipeline-tests
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (step-in-kubernetes)
[Pipeline] sh
[pipeline-tests] Running shell script
+ echo inKubernetes
inKubernetes
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (step-in-agent)
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/pipeline-tests/workspace
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
/bin/sh: 1: cd: can't cd to /var/lib/jenkins/jobs/pipeline-tests/workspace
sh: 1: cannot create /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-log.txt: Directory nonexistent
sh: 1: cannot create /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-result.txt.tmp: Directory nonexistent
mv: cannot stat '/var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-result.txt.tmp': No such file or directory
EXITCODE 0process apparently never started in /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE
In my real pipeline I need to run a few steps in parallel in Kubernetes containers, and after that I need to run some steps on another agent that cannot run in a Kubernetes container. I know it is possible to do something like this in a scripted pipeline, but is it possible to achieve it in a declarative pipeline?
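One way to restructure this in declarative syntax is to drop the top-level kubernetes agent and give every stage its own agent, so the node for 'step-in-agent' is never allocated while the pod's container context is still active (which is what makes the shell step above fail). Below is a minimal sketch reusing the 'kube-cloud' cloud and 'agentX' label from the report; the branch names, 'kube-a'/'kube-b' labels and echo commands are placeholders for the real parallel work and have not been verified on this exact Jenkins/plugin version:

pipeline {
  // No global agent: each stage allocates its own executor.
  agent none
  stages {
    stage('steps-in-kubernetes') {
      // Parallel branches, each in its own Kubernetes pod.
      parallel {
        stage('branch-a') {
          agent {
            kubernetes {
              cloud 'kube-cloud'
              label 'kube-a'
              yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave
"""
            }
          }
          steps {
            sh 'echo inKubernetesA'
          }
        }
        stage('branch-b') {
          agent {
            kubernetes {
              cloud 'kube-cloud'
              label 'kube-b'
              yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave
"""
            }
          }
          steps {
            sh 'echo inKubernetesB'
          }
        }
      }
    }
    stage('step-in-agent') {
      // Runs on the non-Kubernetes agent; no enclosing container context.
      agent { label 'agentX' }
      steps {
        sh 'echo inAgent'
      }
    }
  }
}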
I don't think this can be fixed by the plugin. It is a syntax limitation that can't be overcome.
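For comparison, a rough scripted-pipeline sketch of the flow the reporter alludes to, where the problem does not occur because the two node allocations are siblings rather than nested (identifiers reused from the report; a sketch only, not verified against this setup):

// Scripted pipeline: the Kubernetes node block is closed
// before the second node block opens, so nothing is nested.
podTemplate(cloud: 'kube-cloud', label: 'kube', yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave
""") {
    node('kube') {
        stage('step-in-kubernetes') {
            sh 'echo inKubernetes'
        }
    }
}
node('agentX') {
    stage('step-in-agent') {
        sh 'echo inAgent'
    }
}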