All SSH slaves unexpectedly disconnect when one job finishes



      Using SSH-based slaves, if you are running two or more agents on a slave and have concurrent builds running, all of the jobs unexpectedly fail with SSH disconnections when one of them finishes. 

       Example output from a job that was still running when another job finished: 

      Parameter C_MEMSTYLE bound to: 2 - type: integer 
       Parameter C_OPTIMIZATION bound to: 2 - type: integer 
       Parameter C_MEM_INIT_PREFIX bound to: MainDesign_rs_encoder_0_0 - type: string 
       Parameter C_ELABORATION_DIR bound to: ./ - type: string 
       Parameter C_XDEVICEFAMILY bound to: kintex7 - type: string 
       Parameter C_FAMILY bound to: kintex7 - type: string 
       Connection to 127.0.0.1 closed by remote host.
       [Pipeline] }
       [Pipeline] // script
       [Pipeline] }
       [Pipeline] // withEnv
       [Pipeline] }
       [Pipeline] // stage
       [Pipeline] stage
       [Pipeline] { (Deployment)
       Stage 'Deployment' skipped due to earlier failure(s)
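
      A minimal Declarative Pipeline sketch of the kind of setup involved (not the actual job; the label 'linux-ssh' and the make step are placeholders for the real agent label and 3-hour build):

          pipeline {
              agent none
              stages {
                  stage('Concurrent builds') {
                      // Two branches run in parallel; with two executors on the
                      // SSH slave, both land on the same machine at the same time.
                      parallel {
                          stage('Build A') {
                              agent { label 'linux-ssh' }   // placeholder label for the SSH slave
                              steps {
                                  sh 'make all'             // stand-in for the real long-running build
                              }
                          }
                          stage('Build B') {
                              agent { label 'linux-ssh' }   // same slave, second executor
                              steps {
                                  sh 'make all'
                              }
                          }
                      }
                  }
              }
          }

      In our case the builds are separate jobs rather than parallel branches of one Pipeline, but the pattern is the same: when one build on the slave finishes, the others fail with the "Connection to 127.0.0.1 closed by remote host" error shown above.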
      

      This is persistent and happens regularly.
      I've tried making two slaves with one agent each (both pointing to the same physical slave), but the problem persists.

      This is an issue for us because builds take 3 hours on high-powered machines; it's not feasible to run them one after another, so we need to run them in parallel.

      Jenkins ver. 2.73.3
      SSH Slaves plugin 1.24

      Attached is a screenshot of the basic SSH slave setup.

            Assignee: Ivan Fernandez Calvo
            Reporter: Dion Gonano
