Sorry, yes: 2.11 (corrected). I got my versions confused since we've been upgrading quite a few things.
I agree our network setup is a bit non-standard. It's this way so we have a clear distinction between an actual private network that we use to communicate with other servers, and a "public" network for all non-LAN access. It is less important for Jenkins, but for production it means we aren't mixing any customer traffic with private support services, and thus we can more easily trust the private network. The other reason it's like this is, in part, that we don't have enough IPv4 addresses to give every VM a public IP, but we want to avoid most of the trainwreck that is NATing everything through one network (the "public" one). Our private cloud provider does not yet support IPv6 addresses directly to VMs. If it did, we would likely have a true public network (with actual public IPs assigned to VMs) and a true private network (with a private v4 and a unique-local v6).
From the standpoint of JClouds, it's seeing two networks that both look private, which is technically true: both use private address ranges. But in practice, the Jenkins master can only reach these servers over the network we consider the actual private one.
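As a quick illustration (with made-up example addresses), here's why nothing in the addresses themselves lets a library prefer one network over the other: both sit in RFC 1918 space, so both classify as private:

```python
import ipaddress

# Hypothetical example addresses for the two NICs; substitute real ones.
outside = ipaddress.ip_address("192.168.10.5")  # eth0, the NATed "public" side
inside = ipaddress.ip_address("10.0.10.5")      # eth1, the true private LAN

# Both are RFC 1918 private ranges, so they're indistinguishable by class.
print(outside.is_private, inside.is_private)  # True True
```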
To make it concrete: we have two networks, "inside" and "outside", that both have private IPs (again, the NAT monster for IPv4). Sometimes Jenkins tries to SSH to the "outside" network when it should always SSH to the inside. I had thought I could beat this with cloud-init but wasn't really able to come up with much there. The interface mapping should always be the same (eth0 = outside, eth1 = inside), though.
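Since the subnets themselves are stable, one workaround I'd expect to help (sketched below with an assumed subnet value; this isn't an option the plugin currently exposes, just the shape of the logic) would be to filter candidate addresses down to the inside subnet before connecting:

```python
import ipaddress

# Assumed inside subnet for illustration -- substitute the real range.
INSIDE_NET = ipaddress.ip_network("10.0.0.0/16")

def pick_inside_ip(candidates):
    """Return the first candidate address on the inside network, if any."""
    for ip in candidates:
        if ipaddress.ip_address(ip) in INSIDE_NET:
            return ip
    return None

# e.g. pick_inside_ip(["192.168.10.5", "10.0.10.5"]) -> "10.0.10.5"
```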
Finally, yes: at first it seemed to pick the wrong IP about 75% of the time, though more recent tests show it's closer to 50/50. If Jenkins tried all the IPs, that would eventually sort itself out, but it didn't seem to be doing that: if it can't connect to the outside IP, it keeps retrying the outside IP rather than trying the other one.
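What I'd expect instead is something like the following (a rough sketch of the behavior, not the plugin's actual connect logic): try each candidate address in turn and fall through to the next on failure, instead of retrying the same one forever:

```python
import socket

def first_reachable(candidates, port=22, timeout=5.0):
    """Try each candidate IP in turn; return the first accepting connections."""
    for ip in candidates:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return ip  # connected -- use this address for SSH
        except OSError:
            continue  # unreachable -- fall through to the next candidate
    return None  # no candidate was reachable
```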