Status: Closed (View Workflow)
Resolution: Not A Defect
Recently we upgraded Jenkins from 2.319.1 to 2.361.1 and the Kubernetes plugin from 1.31.1 to 3704.va_08f0206b_95e.
Since the upgrade, we have been facing the following error when trying to establish the connection with a JNLP container.
The error occurs intermittently and we have not been able to find an identifiable pattern or root cause, but it is impacting several production environments.
The comments above are unrelated to this issue. Your problem looks DNS-related (cf. the UnknownHostException in your screenshot). It is likely a transient issue with your environment that is unrelated to Jenkins.
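For context, `UnknownHostException` is what the Java resolver throws when a hostname lookup fails, which is why it points at DNS rather than Jenkins itself. A minimal sketch (host names are placeholders, not taken from the issue) showing how a failed lookup surfaces as this exception:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Returns true if the hostname resolves, false if the lookup
    // fails with UnknownHostException (the error seen in the screenshot).
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "no-such-host.invalid" uses the reserved .invalid TLD,
        // which is guaranteed never to resolve (RFC 2606).
        System.out.println("localhost resolves: " + resolves("localhost"));
        System.out.println("bogus resolves: " + resolves("no-such-host.invalid"));
    }
}
```

A transient cluster-DNS hiccup while the JNLP container resolves the controller's service name would produce exactly this exception and then disappear on retry, matching the "no identifiable pattern" described above.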
||Field||Original Value||New Value||
|Resolution| |Not A Defect [ 7 ]|
|Status|Open [ 1 ]|Closed [ 6 ]|
|Attachment| |image-2023-01-30-12-40-27-440.png [ 59897 ]|
This problem only occurs after upgrading Jenkins to 2.361.1 and the Kubernetes plugin to 3704.va_08f0206b_95e. We also have several Jenkins environments that have not been upgraded yet and still run the previous versions; the problem with JNLP containers terminating does not happen on them, only on the upgraded ones.
Since the UnknownHostException is appearing in the log trace, we also tried changing the Jenkins service URLs on the Kubernetes cloud configuration to the following structure:
This didn't solve the problem either: the error still appeared in some cases, apparently at random, just as before the URL change.
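For reference, in-cluster service URLs normally follow the `<service>.<namespace>.svc.cluster.local` pattern. A hedged Configuration-as-Code sketch of the cloud fields that control these URLs in the Kubernetes plugin (service and namespace names are placeholders, not the ones from this environment):

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "jenkins"
        # Fully qualified in-cluster DNS names (placeholder values):
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        # Tunnel used by JNLP/inbound agents to reach the TCP agent port:
        jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
```

If lookups of even fully qualified names fail intermittently, that again points at cluster DNS (e.g. CoreDNS) rather than the URL format.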
Also having similar issues since updating to Jenkins 2.375.2 and Kubernetes Plugin 3802.
Sometimes pods cannot be started because of
But in other cases pods start and work, but hit this error during test execution. As a result, some pods are left with a terminated jnlp container and still-running additional pod containers on the cluster, which block resources.
These errors do not appear in a deterministic way.
I have already tried raising timeouts on several config options, but it has no effect on these error occurrences.
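For anyone trying the same thing, these are the client-side timeout knobs the Kubernetes cloud exposes in Configuration as Code (values below are arbitrary examples, not recommendations):

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        # API client timeouts, in seconds (example values):
        connectTimeout: 60
        readTimeout: 60
```

Note these only affect the controller's Kubernetes API client; they would not help if the failure is the JNLP container's DNS lookup of the controller.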
Apparently the current Jenkins 2.375 is mixing up the kube configurations. We had these connection issues from the controller to the Kubernetes agents because we had both a kube config file in the .kube dir (test) and a cloud configuration in the UI (prod). With both configurations in place we had ongoing connection issues saying
"Unauthorized! Token may have expired! Please log-in again. Unauthorized 401: must authenticate."
After removing the test config file from the .kube dir, the error disappeared. When reactivating the config again (mv to config.bak and back to config), the error reappeared. So this hints at a bug where Jenkins mixes up the configs and authentication tokens.