Jenkins / JENKINS-55096

Cannot override jnlp container


Details

    Description

      When I try to override the jnlp container image, I still always get the alpine image instead of the one I specified. Even though I define a container named jnlp, it seems to be ignored and the default alpine image is used.

       

      agent {
        kubernetes {
          label "human-review-ui-pipeline-${env.BUILD_ID}"
          defaultContainer 'jnlp'
          yaml """
      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          pod-template: jenkins-slave-npm
      spec:
        containers:
        - name: jnlp
          image: "redhat-cop/jenkins-slave-image-mgmt"
      """
        }
      }

          Activity

            yrsurya suryatej yaramada added a comment - edited

            I tried with a custom image and it didn't work, so I used the official jenkins-jnlp image, which didn't work either.

            Using this in a scripted pipeline:

            node('eks-cluster') {
              stage('Check jnlp') {
                sh 'rm -rf *'
              }
            }

            sushantp Sushant Pradhan added a comment -

            I have a similar issue. Regardless of the YAML or UI override, the plugin always pulls jnlp:alpine, even though my custom jnlp container is named jnlp.

            Jenkins: 2.150.3 and Kubernetes plugin: 1.14.3

            Can we please reopen this ticket, or explain what configuration will make my custom jnlp work?

            csanchez Carlos Sanchez added a comment -

            yrsurya your pod is failing; you need to check why:

            Agent is not connected after 30 seconds

            so it is probably using an old agent, or another one you have with the same label
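
            When the agent never connects, the pod's own events and logs usually say why. A hedged checklist (the pod name placeholder is illustrative; jenkins=slave is the label the plugin applies to its agent pods):

            ```shell
            # Illustrative: list the agent pods the plugin created
            kubectl get pods -l jenkins=slave

            # Check scheduling and image-pull events for a specific agent pod
            kubectl describe pod <agent-pod-name>

            # Read the jnlp container's log to see why it could not reach the master
            kubectl logs <agent-pod-name> -c jnlp
            ```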
            itewk Ian Tewksbury added a comment -

            csanchez I am still having this issue. I don't have any pod templates defined in the `Kubernetes` configuration; I only define the pod/containers in the declarative pipelines, so there is no merging going on. But for some reason, no matter what I put in for `image` for the `jnlp` container in the declarative pipeline, it gets ignored and the default alpine container gets used.

            silkgoat Robert Horvath added a comment - edited

            Same issue here with a declarative pipeline.

            Jenkins version: 2.190.1

            Kubernetes plugin: 1.20.1

            I've created a pod template under the Cloud section in the UI, with the following options:

            Pod Template:
              Name: jenkins-builder
              ...
              Container Template:
                Name: my-jnlp
                Docker image: jenkins/jnlp-slave:latest
                Working directory: /home/jenkins/agent
                ...
              ...
              Workspace Volume: PVC
              Claim name: jenkins-slave-claim

            Then I created this basic pipeline:

            pipeline {
              agent {
                kubernetes {
                  defaultContainer 'my-jnlp'
                  yaml """
            apiVersion: v1
            kind: Pod
            metadata:
              name: jenkins-builder
            spec:
              containers:
              - name: busybox
                image: busybox
                command:
                - cat
                tty: true
            """
                }
              }

              stages {
                stage('start') {
                  steps {
                    container('busybox') {
                      sh "ls"
                    }
                  }
                }
              }
            }

            In the console I always get the default jnlp container:

            apiVersion: "v1"
            kind: "Pod"
            metadata:
              annotations:
                buildUrl: "http://jenkins.default.svc.k8s.si.net:8080/job/test/20/"
              labels:
                jenkins: "slave"
                jenkins/test_20-s37fr: "true"
              name: "test-20-s37fr-xq14x-z8rq5"
            spec:
              containers:
              - command:
                - "cat"
                image: "busybox"
                name: "busybox"
                tty: true
                volumeMounts:
                - mountPath: "/home/jenkins/agent"
                  name: "workspace-volume"
                  readOnly: false
              - command:
                - "cat"
                image: "maven:3-alpine"
                name: "builder-new"
                tty: true
                volumeMounts:
                - mountPath: "/home/jenkins/agent"
                  name: "workspace-volume"
                  readOnly: false
              - env:
                - name: "JENKINS_SECRET"
                  value: "********"
                - name: "JENKINS_TUNNEL"
                  value: "jenkins-agent.default.svc.k8s.si.net:50000"
                - name: "JENKINS_AGENT_NAME"
                  value: "test-20-s37fr-xq14x-z8rq5"
                - name: "JENKINS_NAME"
                  value: "test-20-s37fr-xq14x-z8rq5"
                - name: "JENKINS_AGENT_WORKDIR"
                  value: "/home/jenkins/agent"
                - name: "JENKINS_URL"
                  value: "http://jenkins.default.svc.k8s.si.net:8080/"
                image: "jenkins/jnlp-slave:alpine"
                name: "jnlp"
                volumeMounts:
                - mountPath: "/home/jenkins/agent"
                  name: "workspace-volume"
                  readOnly: false
              nodeSelector: {}
              restartPolicy: "Never"
              volumes:
              - emptyDir:
                  medium: ""
                name: "workspace-volume"
            

            So it's not what I'd like to see, and I cannot figure out how to use my pod with the PVC.

            Unfortunately, I cannot work around this with JENKINS-56375.
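
            One possible workaround sketch, assuming the plugin honors an inline container named exactly jnlp (rather than my-jnlp) and an inline workspace-volume definition. The image and claim name are taken from the comment above; whether this merges cleanly with the UI pod template depends on the plugin version:

            ```groovy
            pipeline {
              agent {
                kubernetes {
                  yaml """
            apiVersion: v1
            kind: Pod
            spec:
              containers:
              # Naming this container 'jnlp' is what replaces the default agent image
              - name: jnlp
                image: jenkins/jnlp-slave:latest
                workingDir: /home/jenkins/agent
              - name: busybox
                image: busybox
                command:
                - cat
                tty: true
              # Declaring workspace-volume inline as the PVC instead of the
              # plugin's default emptyDir; this may conflict on some versions
              volumes:
              - name: workspace-volume
                persistentVolumeClaim:
                  claimName: jenkins-slave-claim
            """
                }
              }
              stages {
                stage('start') {
                  steps {
                    container('busybox') {
                      sh "ls"
                    }
                  }
                }
              }
            }
            ```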


            People

              csanchez Carlos Sanchez
              itewk Ian Tewksbury
              Votes: 1
              Watchers: 6
