Type: Bug
Resolution: Cannot Reproduce
Priority: Major
Labels: None
With SSH-based slaves, if you are running two or more agents on one slave host and have concurrent builds running, all of the jobs fail unexpectedly with SSH disconnections when one of them finishes.
Example log from a job that was running when another finished:
Parameter C_MEMSTYLE bound to: 2 - type: integer
Parameter C_OPTIMIZATION bound to: 2 - type: integer
Parameter C_MEM_INIT_PREFIX bound to: MainDesign_rs_encoder_0_0 - type: string
Parameter C_ELABORATION_DIR bound to: ./ - type: string
Parameter C_XDEVICEFAMILY bound to: kintex7 - type: string
Parameter C_FAMILY bound to: kintex7 - type: string
Connection to 127.0.0.1 closed by remote host.
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deployment)
Stage 'Deployment' skipped due to earlier failure(s)
This is persistent and happens regularly.
I've tried making two slaves with one agent each (both pointing to the same physical slave), but the problem persists.
This is a real issue for us because builds take 3 hours on high-powered machines; running them one after another is not feasible, so we need parallel builds.
Jenkins ver. 2.73.3
SSH Slaves plugin 1.24
Attached is a screenshot of the basic SSH slave setup.
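Since the report involves two SSH agents sharing one host, one quick diagnostic is to check that both agent (remoting) processes are alive and using separate working directories; if the two agents share a work directory, one build finishing can disturb the other. A minimal sketch, assuming a Linux agent host and that the agent JAR name contains "remoting.jar" (both are assumptions, not taken from this report):

```shell
#!/bin/sh
# Count running agent JVMs on this host; with two SSH agents configured
# you would expect to see 2 here.
count=$(pgrep -cf 'remoting.jar' || true)
echo "remoting processes: $count"

# Print each agent process's current working directory; two agents
# resolving to the same directory is a red flag for this kind of failure.
for pid in $(pgrep -f 'remoting.jar' || true); do
  echo "$pid -> $(readlink "/proc/$pid/cwd")"
done
```

Run on the slave host while both agents are connected; compare the output before and after one build finishes to see whether the second process disappears.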
is related to:
JENKINS-49118 SSH Slaves 1.25 Breaks - Resolved
JENKINS-48829 Linux slaves disconnect intermittently - Closed
The log shows the connection to the loopback device (127.0.0.1) being closed. Could you attach the config file of this agent? Is it running on the same machine as the Jenkins instance? Did you set the Xmx and Xms JVM parameters?
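If the heap settings the comment asks about turn out to be unset, the SSH Slaves plugin lets you pass JVM options when launching the agent. A hypothetical value for that field (the sizes are placeholders to illustrate the shape, not recommendations):

```
-Xms512m -Xmx1g
```

Pinning -Xms and -Xmx bounds the agent JVM's heap, which helps rule out memory pressure or long GC pauses being mistaken for network disconnects.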