Status: Resolved
EC2 plugin v1.49+
The Jenkins master runs on Amazon Linux 2. Jenkins uses the EC2 plugin to create slaves on demand, and many jobs are assigned to slaves via labels.
Since upgrading to EC2 plugin 1.49 (and to Jenkins 2.217, which includes Remoting 4.0), some jobs are started on the master node instead of on the launched slaves, seemingly at random. The AWS slave is started, but the workspace is created on the master (under the user's home directory that should have been used on the slave). The job's console log claims it is running on the slave, but this is not true.
This may not be related to the EC2 plugin at all, as I don't see any change related to this problem in the 1.49 release history.
Attachment: I took a screenshot of a node's script console page while, according to the Jenkins logs, the node was being used for a build. I queried the hostname, and although the node's name suggests it is a slave node, the hostname belongs to the master. And, of course, the workspace was created on the master.
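For anyone reproducing this: the script console check was essentially a hostname lookup; something like the following one-liner (a sketch, the exact script isn't preserved) should print the slave's hostname when run on the suspect node's script console page, but here it printed the master's:

    println(java.net.InetAddress.getLocalHost().getHostName());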
Just saw https://github.com/jenkinsci/ec2-plugin/pull/447, which seems likely to fix this issue; one of the comments actually refers to the "Connecting to null on port 22" pattern I described in an earlier comment.
Unfortunately, I saw this again on Jenkins 2.346.3 with EC2 plugin v1.68.
The symptoms are the same: "Connecting to null on port 22", then the job starts on the controller node.
I wonder if adding a null check to https://github.com/jenkinsci/ec2-plugin/blob/master/src/main/java/hudson/plugins/ec2/ssh/EC2UnixLauncher.java#L430 (in addition to the existing check for "0.0.0.0" as the IP address) would be enough to force the controller to wait for the worker to come up and initialize fully.
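Roughly what I have in mind (a sketch only; getEC2HostAddress, logInfo, and the exception-driven retry are my approximations of the plugin's structure around that line, not verbatim code):

    String host = getEC2HostAddress(computer, template);

    // The existing code rejects "0.0.0.0" (the instance has no usable IP yet)
    // and throws, so that the surrounding retry loop waits and tries again.
    // The idea is to treat a null host the same way, instead of falling
    // through and logging "Connecting to null on port 22".
    if (host == null || "0.0.0.0".equals(host)) {
        logInfo(computer, listener, "Invalid host " + host
                + ", the instance is most likely still waiting for an IP address.");
        throw new IOException("sleep and retry"); // caught by the retry loop
    }

    logInfo(computer, listener, "Connecting to " + host + " on port " + port + ".");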
jthompson, I agree that this is most likely a problem with the EC2 plugin, not Remoting, as this log line is emitted during the worker node's startup phase, before the job is handed to the worker. Could thoulen, raihaan, or julienduchesne maybe offer some insight?
My apologies if the direct ping goes against etiquette; I tried, but could not find an ec2-maintainers alias.
raihaan, thanks a lot for the quick fix and release; it was a very pleasant surprise indeed.
I'll keep an eye on the scenario; hopefully this fix makes it go away for good.
raihaan, yes, they do use the same keys, and I've realized that assigning different keys to them would be a useful workaround.
However, I had never had this problem before upgrading to 1.49.1, so having the same keys does not cause the problem by itself, although it makes the failing case that much more severe.