Type: New Feature
Resolution: Unresolved
Priority: Major
As of JENKINS-28689 there is a Workflow step that binds an ssh-agent. This survives Jenkins restarts in the common cases:
node {
  sshagent('...') {
    sh 'ssh user@host command' // restart Jenkins after connection made
  }
}
or
node {
  sshagent('...') {
    sleep 999 // ← restart Jenkins here
    sh 'ssh user@host command'
  }
}
but in this case
node {
  sshagent('...') {
    sh '''
      sleep 999 # ← restart Jenkins here
      ssh ...
    '''
  }
}
the shell script is launched with one value of $SSH_AUTH_SOCK. When Jenkins restarts, the agent server is killed; after the restart a new server is started on a new socket address, defining a new $SSH_AUTH_SOCK for subsequently forked processes. Yet the existing script continues to run with the old value, so when ssh is finally launched it fails to connect to the dead server and dies.
The solution for this problem would be to reuse the same socket address across restarts.
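The failure mode can be illustrated outside Jenkins with plain Unix sockets (a minimal sketch; the paths and stand-in servers here are hypothetical, not the plugin's implementation): a client holding the pre-restart socket path cannot reach a server that came back on a different path, whereas rebinding to the same path would keep the captured value valid.

```python
import os, socket, tempfile

# Illustration only (not the plugin's code): a long-running script
# captured the pre-restart socket path; after the "restart" the new
# server listens elsewhere, so the old path is dead.
tmp = tempfile.mkdtemp()
old_sock = os.path.join(tmp, "before-restart.sock")

server = socket.socket(socket.AF_UNIX)
server.bind(old_sock)          # the original agent server
server.listen(1)
# ...a shell script captures SSH_AUTH_SOCK=old_sock here...
server.close()                 # Jenkins restart kills the agent server

new_sock = os.path.join(tmp, "after-restart.sock")
server2 = socket.socket(socket.AF_UNIX)
server2.bind(new_sock)         # new server, new $SSH_AUTH_SOCK
server2.listen(1)

# The still-running script's ssh uses the stale path and dies:
client = socket.socket(socket.AF_UNIX)
try:
    client.connect(old_sock)
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "ConnectionRefusedError"
print(outcome)
server2.close()
```

Note that OpenSSH's ssh-agent already accepts a fixed socket path via its -a option, which is the kind of address reuse proposed here.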
Another even less common case would be
node {
  sshagent('...') {
    sh '''
      sleep 999
      # ← restart Jenkins here
      ssh ...
    '''
  }
}
where the request to use the private key happens to arrive while Jenkins is restarting. That can only be solved by forking an external process for the agent server, so that it survives the loss of the slave agent.
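The forked-agent idea can be sketched in miniature (an illustration under stated assumptions, not the plugin's code, with a sleeping child standing in for the agent server): starting the server in its own session detaches it from the launching process, so it is not torn down when the launcher's process tree goes away.

```python
import os, subprocess, sys

# Hypothetical stand-in for the agent server: launch the child in its
# own session so it is detached from the launcher's session and process
# group and survives the launcher's death.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.2)"],
    start_new_session=True,  # performs setsid() in the child
)
# The child is now a session leader in a different session from ours,
# so signals aimed at our session or process group do not reach it.
detached = os.getsid(child.pid) != os.getsid(os.getpid())
print(detached)
child.wait()
```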
A related issue is that the current implementation will probably not survive a disconnection and reconnection of the slave agent while the Jenkins master keeps running, since it relies on onResume and lacks a ComputerListener. The forked-agent approach would of course address that as well.
Depends on:
- JENKINS-36997 sshAgent {} inside docker.image().inside {} does not work with long project name (Resolved)
Is blocking:
- JENKINS-28689 Make SSH Agent Plugin compatible with Workflow (Resolved)