Bug
Resolution: Duplicate
Minor
None
Jenkins 2.162
Kubernetes Plugin 1.14.3
I am trying to make the kubernetes plugin assign the proper volumes to each container in the pod and am running into some difficulties.
Building with no template and passing the following yaml to the kubernetes plugin works:
from Jenkinsfile:
agent {
  kubernetes {
    cloud 'openshift'
    label 'golang-build'
    yamlFile 'kubernetesPod.yaml'
  }
}
kubernetesPod.yaml:
spec:
  containers:
  - name: jnlp
    image: 'jenkins/jnlp-slave:latest'
    volumeMounts:
    - name: gitconfig
      mountPath: /home/jenkins/.git/.gitconfig
      subPath: .gitconfig
  - name: docker
    image: docker:1.13.1
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
    - mountPath: /root/.docker/config.json
      subPath: config.json
      name: jenkins-creds
  - name: golang
    image: golang:1-alpine
    command: ['cat']
    tty: true
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - configMap:
      defaultMode: 420
      name: jenkins-dind
    name: jenkins-creds
  - configMap:
      defaultMode: 420
      name: gitconfig
    name: gitconfig
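For reference, once the pod is up, pipeline stages switch into these containers with the kubernetes plugin's container step; a minimal sketch that would sit alongside the agent block above (the stage names and shell commands are illustrative, not from our actual Jenkinsfile):
stages {
  stage('Build') {
    steps {
      // run the compile inside the golang container defined above
      container('golang') {
        sh 'go build ./...'
      }
    }
  }
  stage('Image') {
    steps {
      // run docker commands inside the docker container,
      // which has /var/run/docker.sock mounted
      container('docker') {
        sh 'docker build -t example/app:latest .'
      }
    }
  }
}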
This combination spins up the pod with the docker socket correctly mounted into the docker container (for doing docker builds) and the .gitconfig configmap properly mapped into the full jnlp-slave image, for pulling from our private git repository.
This is obviously a lot of boilerplate that would need to be carried along with every Jenkinsfile, even though they would all want to use the same custom jnlp-slave image and docker container.
So I tried to create a pod template and put most of this configuration into its "Raw yaml for the Pod" section:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: 'jenkins/jnlp-slave:latest'
    volumeMounts:
    - name: gitconfig
      mountPath: /home/jenkins/.git/.gitconfig
      subPath: .gitconfig
  - name: docker
    image: docker:1.13.1
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
    - mountPath: /root/.docker/config.json
      subPath: config.json
      name: jenkins-creds
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - configMap:
      defaultMode: 420
      name: jenkins-dind
    name: jenkins-creds
  - configMap:
      defaultMode: 420
      name: gitconfig
    name: gitconfig
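As an aside, roughly the same configuration can also be expressed from a scripted pipeline with the podTemplate step and its yaml parameter instead of the GUI; a rough sketch under that assumption (the yaml is abbreviated to the jnlp container only, and this is not the setup described in this report):
podTemplate(label: 'golang-build', yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave:latest
''') {
  node('golang-build') {
    // containers defined in the yaml above are reachable via container()
    container('jnlp') {
      sh 'git --version'
    }
  }
}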
With that raw yaml in the default template, developers could then simply add just the relevant container to their kubernetesPod.yaml:
spec:
  containers:
  - name: golang
    image: golang:1-alpine
    command: ['cat']
    tty: true
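The expectation, assuming the raw yaml from the default template is merged with this per-project file, was a combined pod spec along the lines of the first working example, roughly (abbreviated; the volumes section from the template would carry over unchanged):
spec:
  containers:
  - name: jnlp
    image: 'jenkins/jnlp-slave:latest'
    volumeMounts:
    - name: gitconfig
      mountPath: /home/jenkins/.git/.gitconfig
      subPath: .gitconfig
  - name: docker
    image: docker:1.13.1
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
    - name: jenkins-creds
      mountPath: /root/.docker/config.json
      subPath: config.json
  - name: golang
    image: golang:1-alpine
    command: ['cat']
    tty: true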
However, even with that template set as the default, everything in the "Raw yaml" section was completely ignored when the pods were created. I was left with a golang container and a jnlp-slave:alpine container, rather than jnlp-slave:latest as specified. The jnlp container had no volume mounts related to the .gitconfig, and no docker container existed at all.
Here is the API call the kubernetes plugin made to create this pod:
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"build-golang-xld-openshift-6c452-4rkrv","namespace":"devops","selfLink":"/api/v1/namespaces/devops/pods/build-golang-xld-openshift-6c452-4rkrv","uid":"d7094941-2bd4-11e9-ba4b-005056a30653","resourceVersion":"216613302","creationTimestamp":"2019-02-08T19:07:55Z","labels":{"jenkins":"slave","jenkins/build-golang-xld-openshift":"true"},"annotations":{"openshift.io/scc":"restricted"}},"spec":{"volumes":[{"name":"workspace-volume","emptyDir":{}},{"name":"default-token-3aqyr","secret":{"secretName":"default-token-3aqyr","defaultMode":420}}],"containers":[{"name":"golang","image":"golang:latest","command":["cat"],"resources":{},"volumeMounts":[{"name":"workspace-volume","mountPath":"/home/jenkins"},{"name":"default-token-3aqyr","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{"capabilities":{"drop":["KILL","MKNOD","SYS_CHROOT"]}},"tty":true},{"name":"jnlp","image":"jenkins/jnlp-slave:alpine","env":[{"name":"JENKINS_SECRET","value":"ca764e10df4f265a6d662f9dbd657cc21ba19c79bf3f35a2089920324dd560db"},{"name":"JENKINS_AGENT_NAME","value":"build-golang-xld-openshift-6c452-4rkrv"},{"name":"JENKINS_NAME","value":"build-golang-xld-openshift-6c452-4rkrv"},{"name":"JENKINS_URL","value":"http://cwb02dacoapp02.keybank.com:8080/"},{"name":"HOME","value":"/home/jenkins"}],"resources":{},"volumeMounts":[{"name":"workspace-volume","mountPath":"/home/jenkins"},{"name":"default-token-3aqyr","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["KILL","MKNOD","SYS_CHROOT"]}}}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeSelector":{"region":"application"},"serviceAccountName":"default","serviceAccount":"default","nodeName":"sdc01dkrapda06x.keybank.com","securityContext":{"seLinuxOptions":{"level":"s0:c6,c5"}},"imagePullSecrets":[{"name":"default-dockercfg-qxdmx"}],"schedulerName":"default-scheduler"},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-02-08T19:07:55Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2019-02-08T19:07:55Z","reason":"ContainersNotReady","message":"containers with unready status: [golang jnlp]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-02-08T19:07:55Z"}],"hostIP":"10.24.245.251","startTime":"2019-02-08T19:07:55Z","containerStatuses":[{"name":"golang","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"golang:latest","imageID":""},{"name":"jnlp","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"jenkins/jnlp-slave:alpine","imageID":""}],"qosClass":"BestEffort"}}
Going back and adding the containers and volumes to the template via the Jenkins configuration GUI allowed the pods to be created with the proper images. However, the volumes defined there were mounted into every container (jnlp and golang have no need to connect to /var/run/docker.sock, but volumes added through the GUI go to all containers). Additionally, the docker container failed to pick up its docker configuration configmap, because /root/.docker/config.json was created as a directory inside the container rather than a file: the volumes created through the GUI have no concept of kubernetes subPaths.
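For context, the difference comes down to subPath handling in Kubernetes: mounting the configMap volume at a path without a subPath turns that path into a directory holding one file per key, while adding subPath projects a single key as a regular file. A minimal sketch of the two variants, reusing the jenkins-creds volume from above:
# GUI-style mount: /root/.docker/config.json ends up as a directory
volumeMounts:
- name: jenkins-creds
  mountPath: /root/.docker/config.json

# subPath mount: only the config.json key is projected,
# so /root/.docker/config.json is a regular file
volumeMounts:
- name: jenkins-creds
  mountPath: /root/.docker/config.json
  subPath: config.json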
Duplicates: JENKINS-56082 Merge yaml from parent pod template (Resolved)