- Bug
- Resolution: Fixed
- Critical
- None
- Jenkins 1.534, plugin version 1.18, running on Ubuntu 12.10
When running Jenkins on a traditional host and configuring it to launch EC2 slaves with a new AWS account in any region other than us-east-1, the plugin successfully creates an instance but then fails to connect to it or report it online.
The slave log shows the plugin stuck attempting to SSH to the instance's private IP, which is unreachable from outside EC2. The node configuration page correctly shows both the public DNS name and the private IP.
The issue appears to be that new AWS accounts are automatically migrated to EC2-VPC. On new accounts, all instances in regions other than us-east-1 are launched into a "default VPC". The default VPC combines behaviour from EC2-Classic and the original VPC implementation: in particular, although instances report that they are part of a VPC, they are still allocated a public DNS name and a public IP reachable from external hosts.
The plugin code in EC2UnixLauncher assumes that any instance with a VPC ID has no public DNS name and falls back to connecting via the private IP. This appears to be the cause of the issue.
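Paraphrasing from the observed behaviour (I haven't checked the exact source), the logic seems roughly equivalent to the following, written against the AWS SDK for Java Instance model; the class and method names here are mine, not the plugin's:
{code:java}
import com.amazonaws.services.ec2.model.Instance;

// Rough paraphrase of the suspected host selection, NOT the literal
// plugin source: any instance reporting a VPC ID is treated as
// private-only, which breaks for default-VPC instances that do have
// a public DNS name.
class SuspectedHostSelection {
    static String getHost(Instance inst) {
        if (inst.getVpcId() != null && !inst.getVpcId().isEmpty()) {
            return inst.getPrivateIpAddress(); // unreachable from outside EC2
        }
        return inst.getPublicDnsName();
    }
}
{code}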
As far as I can tell, there are now four potential cases:
Jenkins Master | Slave                      | Default connection
Internet       | EC2-Classic                | Slave public DNS
Internet       | EC2-VPC with public IP     | Slave public DNS
EC2-VPC        | EC2-VPC with public IP     | Slave public DNS (1)
EC2-VPC        | EC2-VPC with no public IP  | Slave private IP
(1) When a master in a VPC resolves public DNS for an instance in the same VPC, Amazon automatically returns the private IP.
That suggests the resolution should be to simplify the conditional in EC2UnixLauncher: connect to the public DNS name if the instance has one, and to the private IP otherwise.
I don't currently have a Java/Jenkins plugin build environment available, so I can't easily test this myself.
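For illustration, a rough, untested sketch of the simplified logic described above, again against the AWS SDK for Java Instance model (the class and method names are mine):
{code:java}
import com.amazonaws.services.ec2.model.Instance;
import org.apache.commons.lang.StringUtils;

// Sketch of the proposed fix: prefer the public DNS name whenever EC2
// reports one, regardless of VPC ID, and fall back to the private IP.
class ProposedHostSelection {
    static String getHost(Instance inst) {
        String publicDns = inst.getPublicDnsName();
        if (StringUtils.isNotBlank(publicDns)) {
            // Covers EC2-Classic and default-VPC instances with a public
            // IP; a master inside the same VPC resolving this name gets
            // the private IP back from Amazon anyway (case (1) above).
            return publicDns;
        }
        // VPC-only instance with no public IP: use the private IP.
        return inst.getPrivateIpAddress();
    }
}
{code}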
- is duplicated by: JENKINS-21182 EC2 Plugin: EC2 instance starts but waits forever (Resolved)