Jenkins / JENKINS-56673

Better handling of ChannelClosedException in Declarative pipeline


    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Minor
    • Component: kubernetes-plugin
    • Labels: None
    • Environment: Jenkins 2.150.2, kubernetes plugin 1.14.3

      When a pod gets deleted for any reason, a log/exception like the following appears:

      hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from .... failed. The channel is closing down or has closed down 

      The job then appears to hang indefinitely until a timeout is reached or it is stopped manually.

      In our use case (k8s on preemptible VMs) we actually expect pods to be deleted mid-build, and we want to be able to handle pod deletion with a retry.

      I have not been able to find a way to handle this in declarative syntax.
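
      As a workaround I have been considering a scripted-pipeline sketch (untested; the `preemptible` label and retry count of 2 are placeholders for our setup), which wraps the whole node allocation in `retry` so that a fresh pod is requested if the previous one disappears:

          // Scripted sketch: if the agent pod is deleted, re-run the entire
          // node block on a newly scheduled pod. Whether the
          // ChannelClosedException actually propagates here (instead of the
          // build hanging) is exactly what this issue is about.
          retry(2) {
              node('preemptible') {
                  container('jnlp') {
                      sh '''
                      echo Kill the pod now
                      sleep 5m
                      '''
                  }
              }
          }

      I have not verified that the channel exception reaches the retry step rather than leaving the build hung.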

      For testing, I am using a very simple declarative example:

          stages {
              stage('Try test') {
                  steps {
                      container('jnlp') {
                          sh """
                          echo Kill the pod now
                          sleep 5m
                          """
                      }
                  }
              }
          }
          post {
              failure {
                  echo "Failuuure"
              }
          }

      But the exception does not actually trigger the failure block when the pod is killed.

      Is there currently any best practice for handling the deletion of a pod? Are there any timeout parameters that would be useful in this case?

      I'm happy to open a PR against the README once I learn the recommended approach.

            Assignee: Unassigned
            Reporter: Collin Lefeber (cfebs)