Also having similar issues since updating to Jenkins 2.375.2 and Kubernetes Plugin 3802.
Sometimes pods cannot be started because of:

ERROR: Unable to create pod kubernetes-prod xxxxxxx/xxxxxxxxxx-0z8vv-b8ws1-sq09l. Failure executing: POST at: https://xxxxxxxxxre.com/k8s/clusters/c-m-smkkb8c5/api/v1/namespaces/xxxxxxxx/pods. Message: Unauthorized! Token may have expired! Please log-in again. Unauthorized 401: must authenticate.
In other cases pods start and work but hit this error while tests are running. As a result, some pods are left with a terminated jnlp container while their other pod containers keep running on the cluster and block resources.
These errors do not appear in a deterministic way.
I already tried raising the timeouts on several config options, but that has no effect on these error occurrences.
UPDATE
Apparently the current Jenkins 2.375 is mixing up the kube configurations. We had these connection issues from the controller to the Kubernetes agents because we had both a single kube config file in the .kube dir (test) and a single cloud configuration in the UI (prod). With both configurations in place we saw ongoing connection errors saying
"Unauthorized! Token may have expired! Please log-in again. Unauthorized 401: must authenticate."
After removing the test config file from the .kube dir, the error disappeared. When reactivating the config again (mv to config.bak and back to config), the error reappeared. So this hints at a bug where Jenkins mixes up the configs and authentication tokens.
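The workaround above can be sketched as a small shell snippet; this is just what we did manually, wrapped in a function (the .kube directory path is whatever your controller actually uses, so treat it as an assumption):

```shell
#!/bin/sh
# Sketch of the workaround described above: move the file-based (test)
# kubeconfig aside so Jenkins only sees the cloud configuration from the UI.
# The directory argument is an assumption; on our controller it was ~/.kube.
move_kubeconfig_aside() {
  kube_dir="$1"
  if [ -f "$kube_dir/config" ]; then
    # Keep the file around as config.bak so the test setup can be restored.
    mv "$kube_dir/config" "$kube_dir/config.bak"
  fi
}

# Usage: move_kubeconfig_aside "$HOME/.kube"
# To reproduce the 401 again, restore it:
#   mv "$HOME/.kube/config.bak" "$HOME/.kube/config"
```

Restoring the file immediately brought the 401 back for us, which is what points at the config/token mix-up rather than an actual expired token.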