- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Component/s: kubernetes-plugin
Hi team,
We found an issue in the kubernetes plugin: the nodeSelector field configured in the Kubernetes cloud does not work as expected. We are trying to keep builds isolated on separate nodes using a Cluster Autoscaler.
Relevant piece of the generated pod YAML:
```
    image: jenkins/inbound-agent:4.3-4
    imagePullPolicy: IfNotPresent
    name: jnlp
    resources:
      limits:
        cpu: 200m
        memory: 300Mi
      requests:
        cpu: 100m
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: xxxxxx
      readOnly: true
    - mountPath: /var/run/xxxx
      name: xxxx
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: xxxxx
  nodeSelector:
    kubernetes.io/os: linux
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000
  serviceAccount: xxxx
  serviceAccountName: xxxx
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
```
As you can see in the attached image, we have configured this nodeSelector in our Kubernetes cloud (our default cloud) in the Jenkins configuration, but when we launch a new job the pod is not scheduled as expected. We have to add the selector to the YAML of every pipeline ourselves, because the cloud-level option is not applied.
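For reference, this is roughly how we repeat the selector in each pipeline today; a minimal sketch of the workaround, assuming a standard declarative pipeline (the stage name and shell step are placeholders):

```groovy
// Hypothetical workaround: declare nodeSelector explicitly in the
// pipeline's own pod template instead of relying on the cloud-level field.
pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    kubernetes.io/os: linux   # repeated here because the cloud-level value is ignored
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:4.3-4
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'echo running on the selected node'
      }
    }
  }
}
```

Having to duplicate this block in every Jenkinsfile is exactly what the cloud-level nodeSelector field should avoid.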
Versions:
- Jenkins: 2.263.4
- Kubernetes: 1.18.9
- Kubernetes plugin: 1.29.2
Thank you in advance.