Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment: Jenkins 2.73.1, kubernetes-plugin 1.0
If there are two templates with different names but the same label, each with a maximum of 1 pod, the scheduling system only allows 1 pod to be spun up instead of 2.
Kubernetes (and thus OpenShift) does not seem to support dynamic persistent volume claims, so all pods from the same template share the same persistent volume claim (PVC). The suggested workaround is to create a separate template (replication controller or deployment config) for each PVC, with a replica count of 1. Simply using subpaths on a single PVC for all replicated pods is not sufficient because of performance concerns.
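For illustration, a minimal sketch of that workaround using the fabric8 kubernetes-client model (the names "maven-1", "maven-data-1" and the image are placeholders, not taken from this report): one Deployment per PVC, each pinned to replicas = 1.

{code:java}
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;

public class OnePodPerPvcSketch {

    // Build one Deployment per PVC with replicas fixed at 1, instead of one
    // Deployment with many replicas sharing a single claim.
    static Deployment deploymentFor(String name, String claimName) {
        return new DeploymentBuilder()
            .withNewMetadata()
                .withName(name)
            .endMetadata()
            .withNewSpec()
                .withReplicas(1)                      // one pod per claim
                .withNewSelector()
                    .addToMatchLabels("app", name)
                .endSelector()
                .withNewTemplate()
                    .withNewMetadata()
                        .addToLabels("app", name)
                    .endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("maven")
                            .withImage("maven:3")     // placeholder image
                            .addNewVolumeMount()
                                .withName("data")
                                .withMountPath("/data")
                            .endVolumeMount()
                        .endContainer()
                        .addNewVolume()
                            .withName("data")
                            .withNewPersistentVolumeClaim()
                                .withClaimName(claimName)   // dedicated claim per Deployment
                            .endPersistentVolumeClaim()
                        .endVolume()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();
    }

    public static void main(String[] args) {
        // Two Deployments with their own claims, rather than one with replicas = 2.
        Deployment first = deploymentFor("maven-1", "maven-data-1");
        Deployment second = deploymentFor("maven-2", "maven-data-2");
        System.out.println(first.getMetadata().getName() + ", " + second.getMetadata().getName());
    }
}
{code}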
I have done this in the config (a rough sketch of the equivalent plugin API calls follows the list):

template1:
  name: maven
  labels: mavenlabel
  Time in minutes to retain slave when idle: 1 min
  max pods: 1

template2:
  name: maven2
  labels: mavenlabel
  Time in minutes to retain slave when idle: 1 min
  max pods: 1
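The same two-template setup, expressed as a sketch through the plugin's Java model (this assumes the kubernetes-plugin 1.x PodTemplate / ContainerTemplate / KubernetesCloud setters; the cloud name and container image are placeholders, since the configuration above was done in the UI):

{code:java}
import java.util.Collections;

import org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate;
import org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud;
import org.csanchez.jenkins.plugins.kubernetes.PodTemplate;

public class TwoTemplatesSameLabel {

    // One pod template capped at a single pod, carrying the shared label.
    static PodTemplate template(String name) {
        PodTemplate t = new PodTemplate();
        t.setName(name);            // "maven" / "maven2"
        t.setLabel("mavenlabel");   // the same label on both templates
        t.setInstanceCap(1);        // max pods: 1
        t.setIdleMinutes(1);        // retain the idle slave for 1 minute
        t.setContainers(Collections.singletonList(
                new ContainerTemplate("maven", "maven:3")));  // placeholder image
        return t;
    }

    public static void main(String[] args) {
        KubernetesCloud cloud = new KubernetesCloud("openshift");  // placeholder cloud name
        cloud.addTemplate(template("maven"));
        cloud.addTemplate(template("maven2"));
        // Expectation: with both templates capped at 1 pod, the label
        // "mavenlabel" should be able to provision 2 pods in total.
    }
}
{code}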
I have two jobs:

kube1: restricted to label "mavenlabel", sleeps for 30 seconds
kube2: restricted to label "mavenlabel", sleeps for 30 seconds
After launching kube1, it creates a pod from the "maven" template and sleeps. Launching kube2 then ends up waiting for the pod from kube1 to finish instead of creating another pod from the "maven2" template:
(pending—Waiting for next available executor on maven-2c3qv)
The code may have to be changed to support multiple templates using the same label. Another use case where this could come up is if subsets of node labels are used. Example label combos:
https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/PodTemplateUtils.java#L194
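For reference, a minimal, hypothetical sketch of the kind of change this suggests (simplified stand-in types, not the plugin's actual code at the linked line): collect every template whose label matches and fall back to the next one when the first is at its instance cap.

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class LabelMatchingSketch {

    /** Simplified stand-in for the plugin's pod template (assumed fields only). */
    static class Template {
        String name;
        String label;
        int instanceCap;   // "max pods" in the UI
        int runningPods;   // pods currently provisioned from this template

        Template(String name, String label, int instanceCap, int runningPods) {
            this.name = name;
            this.label = label;
            this.instanceCap = instanceCap;
            this.runningPods = runningPods;
        }

        boolean matches(String requestedLabel) {
            return label != null && label.equals(requestedLabel);
        }

        boolean hasCapacity() {
            return runningPods < instanceCap;
        }
    }

    /** Return every template for a label instead of only the first match. */
    static List<Template> getTemplatesByLabel(String label, Collection<Template> templates) {
        List<Template> result = new ArrayList<>();
        for (Template t : templates) {
            if (t.matches(label)) {
                result.add(t);
            }
        }
        return result;
    }

    /** Pick the first matching template that still has free capacity. */
    static Template selectTemplate(String label, Collection<Template> templates) {
        for (Template t : getTemplatesByLabel(label, templates)) {
            if (t.hasCapacity()) {
                return t;
            }
        }
        return null;   // no capacity anywhere: the build stays queued
    }

    public static void main(String[] args) {
        List<Template> templates = new ArrayList<>();
        templates.add(new Template("maven", "mavenlabel", 1, 1));   // already running kube1's pod
        templates.add(new Template("maven2", "mavenlabel", 1, 0));  // still has capacity

        // With per-label fallback, kube2 would get a pod from "maven2"
        // instead of queueing behind kube1.
        Template chosen = selectTemplate("mavenlabel", templates);
        System.out.println(chosen == null ? "queued" : chosen.name);  // prints "maven2"
    }
}
{code}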