- Bug
- Resolution: Fixed
- Minor
- None
We have an issue with the labels assigned to Jenkins slave pods by the Jenkins Kubernetes plugin. Labels on a Kubernetes pod are key/value pairs, and a valid label key has two segments: an optional prefix and a name, separated by a slash. The issue we are seeing is that the key is unique for each pod that is spun up. Following the guidelines here: https://github.com/jenkinsci/kubernetes-plugin means that we end up with a pod label like `jenkins/worker-a3346222-237b-4882-a8f9-2d1f125e2a44=true`. So instead of putting the unique id for the worker into the value, it is part of the key. This becomes a problem because we get one index per build in Elasticsearch. Has anybody thought about this, or does anyone know a workaround? It seems unnecessary that we have to write a Logstash filter to ignore fields starting with `jenkins/`, something along these lines (see the sketch below).
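A rough sketch of such a filter, using Logstash's `ruby` plugin. It assumes the pod labels arrive as top-level event fields named `jenkins/<uuid>`; with Filebeat's Kubernetes metadata they may instead be nested under `[kubernetes][labels]`, so the field access would need adjusting:

```
filter {
  ruby {
    # Sketch only: fold each "jenkins/<uuid>" => "true" field into a single
    # jenkins_build field, so the unique id ends up in the value, not the key.
    code => '
      event.to_hash.keys.each do |key|
        next unless key.start_with?("jenkins/")
        event.set("jenkins_build", key.sub("jenkins/", ""))
        event.remove(key)
      end
    '
  }
}
```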
- duplicates JENKINS-60088 Change pod label - build label from unique key to value. (Resolved)
- is related to JENKINS-60088 Change pod label - build label from unique key to value. (Resolved)
I too had to work through this in Dec 2018 and discussed it with Eric in the Slack channel. I couldn't get it to work with Logstash and Filebeat in any combination. Instead, in the Kube stack that runs our Jenkins builders, I had to install and configure fluentd and use inline Ruby scripting to morph the fields: capture everything after the `jenkins/` prefix (`jenkins/(.+)`) and create a new key `jenkins_build=$1`. What I had to do to accomplish it is documented here:
https://gist.github.com/mrballcb/c1a8ff4132224e654e85aad80f3a0fec
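Roughly, the shape of such a rule is sketched below (the gist above is the authoritative version; the `kubernetes.**` match and the `kubernetes.labels` nesting produced by the Kubernetes metadata filter are assumptions about the setup):

```
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    # Sketch: find the "jenkins/<uuid>" label key and keep the uuid as a value
    jenkins_build ${record.dig("kubernetes", "labels").to_h.keys.grep(/^jenkins\//).first.to_s.sub("jenkins/", "")}
  </record>
</filter>
```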