Jenkins / JENKINS-51700

Cannot run command on other agent/master when global agent is kubernetes


    Details


      Description

      A simplified version of my declarative pipeline looks like this:

      pipeline {
        agent {
          kubernetes {
            cloud 'kube-cloud'
            label 'kube'
            yaml """
      apiVersion: v1
      kind: Pod
      metadata:
        name: kube
      spec:
        containers:
        - name: jnlp
          image: jenkins/jnlp-slave
      """
          }
        }
      
        stages {
          stage('step-in-kubernetes') {
            steps {
              sh 'echo inKubernetes'
            }
          }
      
          stage('step-in-agent') {
            agent {
              label 'agentX'
            }
            steps {
              sh 'echo inAgent'
            }
          }
        }
      }
      

      In this case the pipeline fails on the 'step-in-agent' stage as follows:

      Running in Durability level: MAX_SURVIVABILITY
      [Pipeline] podTemplate
      [Pipeline] {
      [Pipeline] node
      Still waiting to schedule task
      All nodes of label ‘kube’ are offline
      Agent kube-3cqmb-dpw7t is provisioned from template Kubernetes Pod Template
      Agent specification [Kubernetes Pod Template] (kube):
      
      Running on kube-3cqmb-dpw7t in /home/jenkins/workspace/pipeline-tests
      [Pipeline] {
      [Pipeline] container
      [Pipeline] {
      [Pipeline] stage
      [Pipeline] { (step-in-kubernetes)
      [Pipeline] sh
      [pipeline-tests] Running shell script
      + echo inKubernetes
      inKubernetes
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] stage
      [Pipeline] { (step-in-agent)
      [Pipeline] node
      Running on Jenkins in /var/lib/jenkins/jobs/pipeline-tests/workspace
      [Pipeline] {
      [Pipeline] sh
      [workspace] Running shell script
      /bin/sh: 1: cd: can't cd to /var/lib/jenkins/jobs/pipeline-tests/workspace
      sh: 1: cannot create /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-log.txt: Directory nonexistent
      sh: 1: cannot create /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-result.txt.tmp: Directory nonexistent
      mv: cannot stat '/var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381/jenkins-result.txt.tmp': No such file or directory
      EXITCODE   0process apparently never started in /var/lib/jenkins/jobs/pipeline-tests/workspace@tmp/durable-0bd20381
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] }
      [Pipeline] // container
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] }
      [Pipeline] // podTemplate
      [Pipeline] End of Pipeline
      ERROR: script returned exit code -2
      Finished: FAILURE

      In my real pipeline, I must run a few steps in parallel in Kubernetes containers, and afterwards I must run some steps on another agent that cannot be a Kubernetes container. I know that it is possible to do something like this in a scripted pipeline, but is it possible to achieve this in a declarative pipeline?
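      For comparison, a rough sketch of the scripted-pipeline approach the reporter alludes to: in scripted syntax each node block allocates its executor independently, so a Kubernetes pod and a regular agent can be used in sequence. This reuses the cloud name, labels, and pod YAML from the description and is untested.

      ```groovy
      // Scripted-pipeline sketch (assumption: same 'kube-cloud' cloud and
      // 'agentX' label as in the declarative example above).
      podTemplate(cloud: 'kube-cloud', label: 'kube', yaml: '''
      apiVersion: v1
      kind: Pod
      metadata:
        name: kube
      spec:
        containers:
        - name: jnlp
          image: jenkins/jnlp-slave
      ''') {
        node('kube') {
          stage('step-in-kubernetes') {
            sh 'echo inKubernetes'
          }
        }
      }
      // Outside the podTemplate block, a separate node allocation works normally.
      node('agentX') {
        stage('step-in-agent') {
          sh 'echo inAgent'
        }
      }
      ```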

        Attachments

          Activity

          Andrew Bayer (abayer) added a comment:

          I think this is a combination of a limitation in the kubernetes plugin (you can't run a node block inside a container block) and a limitation in Declarative (per-stage agents run within the top-level agent block). You may want to investigate sequential stages (https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/): you could have a top-level agent none, then a parent stage with the kubernetes agent containing a stages block of all the stages that need to run on that kubernetes agent, and finally, outside that parent stage, the stage that needs to run on a different agent.
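          The sequential-stages layout described above can be sketched roughly as follows. This is only an illustration, reusing the pod YAML and the 'kube-cloud' and 'agentX' labels from the description; it has not been run against this issue.

          ```groovy
          pipeline {
            // No top-level agent, so each stage allocates its own.
            agent none
            stages {
              stage('on-kubernetes') {
                agent {
                  kubernetes {
                    cloud 'kube-cloud'
                    label 'kube'
                    yaml """
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube
          spec:
            containers:
            - name: jnlp
              image: jenkins/jnlp-slave
          """
                  }
                }
                // Nested stages: everything here runs on the Kubernetes agent.
                stages {
                  stage('step-in-kubernetes') {
                    steps {
                      sh 'echo inKubernetes'
                    }
                  }
                }
              }
              // Sibling of the parent stage, so it gets a fresh agent allocation.
              stage('step-in-agent') {
                agent { label 'agentX' }
                steps {
                  sh 'echo inAgent'
                }
              }
            }
          }
          ```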

          Yo-An Lin (c9s) added a comment:

          Same here. I wanted to have a default k8s agent, and in a sub-stage I want to allocate a node to isolate the tests. However, it shows a "can not cd into {workspace}" error.

          Steven Schlansker (stevenschlansker) added a comment:

          We are also hitting this issue on Jenkins 2.176.2 with Kubernetes plugin 1.18.1.

          Vincent Latombe (vlatombe) added a comment:

          I don't think this can be fixed by the plugin. It is simply a syntax limitation that can't be overcome.


            People

            Assignee:
            Unassigned
            Reporter:
            Mateusz Janczuk (mjanczuk)
            Votes:
            9
            Watchers:
            12
