Bug
Resolution: Unresolved
Major
None
Plugin version 1.42
Using version 1.39 of the plugin, when the instance cap was set to 2 and 2 slaves were already provisioned (e.g. in the stopped EC2 state rather than terminated) and a slave was required for a pending queued build, the plugin simply started one of the 2 stopped instances.
On 1.42, the plugin instead leaves the 2 previous EC2 instances in the stopped state on AWS and provisions a brand-new slave. This exceeds the instance cap of 2: there are now 3 EC2 instances (albeit 2 stopped and only 1 running).
For now, my workaround was to go back to 1.39.
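The 1.39 behaviour described above could be sketched roughly as follows. This is not the plugin's actual code, just a hypothetical illustration of the expected provisioning logic: stopped instances are reused before anything new is provisioned, and stopped instances count toward the cap (the names `pick_action`, `instances`, and the state strings are all invented for this sketch).

```python
def pick_action(instances, cap):
    """Return ('start', instance), ('provision', None), or ('none', None)."""
    # Both running and stopped instances count toward the cap;
    # only terminated instances are excluded.
    active = [i for i in instances if i["state"] in ("running", "stopped")]
    stopped = [i for i in active if i["state"] == "stopped"]
    if stopped:
        # 1.39 behaviour: reuse an existing stopped instance first.
        return ("start", stopped[0])
    if len(active) < cap:
        # Only provision a brand-new instance if we are under the cap.
        return ("provision", None)
    return ("none", None)

# Reporter's scenario: cap of 2, two instances already stopped.
instances = [{"id": "i-1", "state": "stopped"},
             {"id": "i-2", "state": "stopped"}]
action, inst = pick_action(instances, cap=2)
# Expected: start a stopped instance rather than provision a third.
```

Under 1.42 the observed behaviour corresponds to skipping both the reuse step and the cap count for stopped instances, which is what produces the third instance.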
Were the nodes in the stopped state created before the upgrade?
Between 1.39 and 1.41 the tag labeling changed; if the nodes were created before the upgrade, their labels are not recognized by the new version.
What does the log report about the number of stopped instances?