The Jenkins master runs on Amazon Linux 2. Jenkins uses the EC2 plugin to create slaves whenever needed, and many jobs are assigned to slaves using labels.
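For context, a typical job targets the EC2 slaves through a label expression. A minimal Pipeline sketch of that setup (the label name 'ec2-linux' is only an example, not our actual configuration, and our jobs are not necessarily Pipeline jobs) would look like this:

pipeline {
    // Run only on agents provisioned by the EC2 plugin that carry this label;
    // the label name is illustrative.
    agent { label 'ec2-linux' }
    stages {
        stage('Build') {
            steps {
                // The workspace for this job should be created on the EC2 slave,
                // never on the master.
                sh 'hostname && pwd'
            }
        }
    }
}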
Since upgrading to EC2 plugin 1.49 (and to Jenkins 2.217, which contains Remoting 4.0), some jobs - seemingly at random - are started on the master node instead of on the slaves that were launched for them. The AWS slave is started, but the workspace is created on the master (under the user's home directory that should have been used on the slave). The job's console log claims it is running on the slave, but that is not true.
This may not be related to the EC2 plugin at all, as I don't see any change related to this problem in the 1.49 release history.
Attachment: I took a screenshot of a node's script console page while, according to the Jenkins logs, that node was being used for a build. I queried the hostname, and although the node's name suggests it is a slave node, the hostname belongs to the master. And, of course, the workspace was created on the master.
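For reference, asking a node for its hostname from its script console can be done with a Groovy one-liner like the following (a sketch; the exact command used in the screenshot may differ):

// Run on the slave node's script console (Manage Jenkins -> Nodes -> <node> -> Script Console).
// If the script really executes on the slave, this prints the slave's hostname;
// in the screenshot it printed the master's hostname instead.
println InetAddress.localHost.hostName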
I have run into a similar problem with Jenkins v2.204.2 and EC2 plugin v1.49.1. In our case the master was actually overloaded by the misdirected job, and the Jenkins process was killed by the OOM-killer.
One symptom I found was that the Jenkins log line that normally records the EC2 plugin's connection attempt to the newly created worker was missing the IP address, printing "null" instead. A regular log entry contains the worker's IP address; the bad line below appeared only twice in several weeks, each time immediately before the failure:
2020-03-10 04:47:57.113+0000 [id=797326] INFO hudson.plugins.ec2.EC2Cloud#log: Connecting to null on port 22, with timeout 10000.
Observe the "null" instead of a valid IP address.