Type: Improvement
Resolution: Unresolved
Priority: Major
Issue:
I have a massively parallel pipeline that spins up a series of Kubernetes pods to run a build process, with a few containers in each pod. Occasionally (about 1 in every 5-10 builds) the Kubernetes pod disconnects with the error message:
Could not connect to $CONTAINER to send interrupt signal to process
This is very difficult to troubleshoot, since no additional details are provided.
Resolution:
It would be very helpful to get logs or more details about the failure, potentially by dumping the failed pod's logs somewhere. Having this inside the Kubernetes plugin would be especially valuable, because the pod no longer exists after the build is done, so retrieving its logs is difficult unless the failure is caught instantly.
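Until the plugin captures this automatically, one possible manual workaround is to pull the pod's logs and events before the pod is garbage-collected. This is only a sketch: the pod name (`$POD`) and namespace (`$NS`) below are hypothetical placeholders for whatever names your build environment uses, and it assumes `kubectl` access to the build namespace.

```shell
# Hypothetical names; substitute your actual build pod and namespace.
POD=my-build-pod
NS=jenkins-builds

# Logs from every container in the pod; if the containers have already
# terminated and restarted, --previous retrieves the prior run's logs.
kubectl -n "$NS" logs "$POD" --all-containers=true > "${POD}-logs.txt" \
  || kubectl -n "$NS" logs "$POD" --all-containers=true --previous > "${POD}-logs.txt"

# Scheduling and startup events for the pod (image pull errors,
# OOM kills, evictions) - often the only clue when a pod never started.
kubectl -n "$NS" get events \
  --field-selector "involvedObject.name=${POD}" \
  --sort-by=.lastTimestamp > "${POD}-events.txt"

# Full pod spec and status for post-mortem inspection.
kubectl -n "$NS" get pod "$POD" -o yaml > "${POD}-pod.yaml" 2>/dev/null || true
```

This has to run while the pod still exists (or very shortly after termination, before the kubelet prunes the container), which is exactly why doing it from inside the plugin would be more reliable than doing it by hand.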
- links to
[JENKINS-56992] When a pod dies in kubernetes, we should be able to dump out the pod logs somewhere
Labels | New: diagnostics |
Assignee | Original: Carlos Sanchez [ csanchez ] |
Remote Link | New: This issue links to "related CloudBees-internal issue (Web Link)" [ 23250 ] |
I have a similar problem (running Jenkins builds with OpenShift) where pods occasionally fail to start and disappear without any trace of logs kept anywhere. It would be great if logs or events from pod and container startup failures could show up in the build log on Jenkins.
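For the startup-failure case described in this comment, where the pod vanishes before anyone can inspect it, one stopgap is to record cluster events continuously from a sidecar process or a separate terminal, so the failure reason survives the pod's deletion. A minimal sketch, assuming a hypothetical `jenkins-builds` namespace:

```shell
# Stream and persist all events in the build namespace; run this
# alongside the Jenkins builds so startup failures (FailedScheduling,
# ErrImagePull, CrashLoopBackOff, ...) are captured even after the
# pod itself is gone.
kubectl -n jenkins-builds get events --watch -o wide | tee -a pod-events.log
```

Events are only retained by the API server for a limited window (one hour by default), so persisting them as they happen is the reliable way to keep a trace.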