Jenkins / JENKINS-61051

Jobs are started on master instead of EC2 slaves randomly


Details

    • Released As: EC2-Plugin 2.0.2

    Description

      The Jenkins master runs on an Amazon Linux 2 instance. Jenkins uses the EC2 plugin to create slaves whenever needed, and many jobs are assigned to slaves using labels.

      Since upgrading to EC2 plugin 1.49 (and to Jenkins 2.217, which contains Remoting 4.0), some jobs - randomly, it seems - are started on the master node instead of on the started slaves. The AWS slave is started, but the workspace is created on the master (in the user's home directory, which should have been used on the slave). The job's console log says it is running on the slave, but that is not true.

      Maybe this is not related to the EC2 plugin, as I don't see any change related to this problem in the 1.49 release history.

      Attachment: I took a screenshot of a node's script console page while - according to the Jenkins logs - it was being used for a build. I queried the hostname, and although the node's name suggests it is a slave node, the hostname belongs to the master. And of course the workspace was created on the master.

        Activity

          gaborv Gabor V created issue -
          gaborv Gabor V made changes -
          Field Original Value New Value
          Attachment Screenshot 2020-02-11 at 13.59.23.png [ 50188 ]
          Description (added the "Attachment:" paragraph to the description)
          gaborv Gabor V made changes -
          Assignee FABRIZIO MANFREDI [ thoulen ] Jeff Thompson [ jthompson ]
          gaborv Gabor V made changes -
          Component/s remoting [ 15489 ]
          Labels agents ec2 plugin slave agents ec2 plugin remoting slave
          gaborv Gabor V made changes -
          Description (noted the upgrade to Jenkins 2.217, which contains Remoting 4.0)
          jthompson Jeff Thompson added a comment -

          This probably doesn't have anything to do with Remoting. It's probably something about the ec2-plugin not launching the job in the right place or not using the desired agent configuration. My guess is that it will require additional diagnostics to track down. Anything you can do to collect better troubleshooting data or a reproducible scenario will likely be necessary to resolve this.
          jthompson Jeff Thompson made changes -
          Assignee Jeff Thompson [ jthompson ]
          gaborv Gabor V added a comment -

          "something about the ec2-plugin not launching the job in the right place" - I thought Jenkins launches the job, the ec2-plugin just creates the slaves

          gaborv Gabor V made changes -
          Assignee Francis Upton [ francisu ]
          jthompson Jeff Thompson added a comment -

          I'm not familiar with the details of the ec2-plugin, but I know it does some complicated stuff, including in how it manages agents. When I've looked into the code there before, there were some complicated pieces. If you can reproduce the problem without the ec2-plugin, then it is probably due to something in the Jenkins server. (Since I'm not familiar with any reports of that, it seems unlikely.) If it only occurs with the ec2-plugin, then it's probably something to do with the custom capabilities it provides.

          laszlog Laszlo Gaal added a comment -

          I have run into a similar problem with Jenkins v2.204.2 and EC2 plugin v1.49.1. In our case the master was actually overloaded by the misdirected job, and the Jenkins process was killed by the OOM-killer.

          One symptom I found was that the Jenkins log lines that normally record the connection attempt from the EC2 plugin to the newly created worker were missing the IP address, printing "null" instead:

          Regular log entry:

          2020-03-10 04:47:57.202+0000 [id=797295]        INFO    hudson.plugins.ec2.EC2Cloud#log: Connecting to 172.31.26.224 on port 22, with timeout 10000. 

          Bad log line (only 2 instances in several weeks, immediately before the failure):

          2020-03-10 04:47:57.113+0000 [id=797326]        INFO    hudson.plugins.ec2.EC2Cloud#log: Connecting to null on port 22, with timeout 10000. 

          Observe the "null" instead of a valid IP address.
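
          (Illustration only, not the plugin's actual code: a minimal guard of this kind, with made-up names, would make an unassigned address fail fast and visibly instead of falling through to "Connecting to null on port 22".)

```java
// Hypothetical sketch, not code from the ec2-plugin: validate the resolved
// instance address before attempting the SSH connection, so an address that
// is still null (or the placeholder 0.0.0.0) fails early and loudly.
public class HostGuard {
    static String requireResolvedHost(String host) {
        if (host == null || host.isEmpty() || "0.0.0.0".equals(host)) {
            throw new IllegalStateException(
                "EC2 instance address not yet available: " + host);
        }
        return host;
    }
}
```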

          jthompson Jeff Thompson added a comment -

          That sounds like it is an issue in the EC2 plugin. Possibly a timing problem. Presumably if the IP address isn't specified it runs the job on the master.

          laszlog Laszlo Gaal added a comment -

          Just ran into this again. jthompson: yeah, it looks like either a timing problem or a race.

          As a workaround I installed roadblocks on the master that should fail such an errant job very early in the startup/config phase, before it has a chance to consume all memory and trigger an OOM-kill. We'll see if it's enough; I'd really hate to downgrade the plugin again.

          gaborv Gabor V added a comment -

          Any idea who from the ec2-plugin team could work on this bug? To whom should we assign it?


          raihaan Raihaan Shouhell added a comment -

          EC2 just launches and manages agents; it doesn't actually do anything with regard to assigning agents.
          That null does look suspicious.

          Does your master use the same pem as your agents? I'm assuming that your agents are Linux and using SSH as well.

          laszlog Laszlo Gaal added a comment -

          raihaan, yes, they do use the same keys, and I've realized that assigning different keys to them would be a useful workaround.

          However, I never had this problem before upgrading to 1.49.1, so having the same keys does not cause the problem, although it makes the failing case that much more severe.

          laszlog Laszlo Gaal added a comment - edited

          Just saw https://github.com/jenkinsci/ec2-plugin/pull/447, which seems likely to fix this issue; one of the comments actually refers to the

          Connecting to null on port 22 

          pattern I described in an earlier comment.

          laszlog Laszlo Gaal added a comment -

          Unfortunately, I saw this again on Jenkins 2.346.3 with ec2-plugin v1.68.

          Symptoms are the same: "Connecting to null on port 22", then the job starts on the controller node.

          I wonder whether adding a null check at https://github.com/jenkinsci/ec2-plugin/blob/master/src/main/java/hudson/plugins/ec2/ssh/EC2UnixLauncher.java#L430 (in addition to the existing check for "0.0.0.0" as the IP address) would be enough to force the controller to wait for the worker to come up and initialize fully.

          jthompson, I agree that this is most likely a problem with the ec2 plugin, not Remoting, as this log line is emitted while the worker node is still in its startup phase, before the job is handed to the worker. Could thoulen, raihaan, or julienduchesne maybe offer some insight?

          My apologies if the direct ping goes against etiquette; I tried, but could not find an ec2-maintainers alias.
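
          (A rough sketch of the combined check suggested above. This is hypothetical: the real launcher code, method names, and retry policy in the plugin differ; the names here are made up for illustration. The idea is to treat both null and "0.0.0.0" as "address not assigned yet" and keep polling before handing the node over.)

```java
import java.util.function.Supplier;

// Hypothetical sketch: poll the instance's address and treat null and
// "0.0.0.0" identically, so the connection attempt only starts once a
// routable address exists. Not the ec2-plugin's actual implementation.
public class AddressWait {
    static boolean isUsable(String host) {
        return host != null && !host.isEmpty() && !"0.0.0.0".equals(host);
    }

    // Polls the address supplier until it yields a usable address,
    // or gives up (returning null) after maxAttempts tries.
    static String waitForAddress(Supplier<String> lookup, int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            String host = lookup.get();
            if (isUsable(host)) {
                return host;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null; // caller must treat this as "instance never became reachable"
    }
}
```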

          raihaan Raihaan Shouhell made changes -
          Assignee Francis Upton [ francisu ] Raihaan Shouhell [ raihaan ]
          raihaan Raihaan Shouhell made changes -
          Status Open [ 1 ] In Progress [ 3 ]

          raihaan Raihaan Shouhell added a comment -

          Should be fixed in ec2 2.0.2
          raihaan Raihaan Shouhell made changes -
          Status In Progress [ 3 ] In Review [ 10005 ]
          raihaan Raihaan Shouhell made changes -
          Released As EC2-Plugin 2.0.2
          Resolution Fixed [ 1 ]
          Status In Review [ 10005 ] Resolved [ 5 ]
          laszlog Laszlo Gaal added a comment -

          raihaan, thanks a lot for the quick fix and release; it was a very pleasant surprise indeed.

          I'll keep an eye on the scenario; hopefully this fix makes it go away for good.


          People

            Assignee: raihaan Raihaan Shouhell
            Reporter: gaborv Gabor V
            Votes: 2
            Watchers: 5