We run a Jenkins instance for SQLAlchemy (http://jenkins.sqlalchemy.org/) where, in the vast majority of cases, builds run on the "master" (i.e. the base host, not an EC2 instance). We use label expressions, such as "mysql&&cpython3", to ensure each build can find a node, and currently only the "master" satisfies the conditions for all but one build.
The one build that runs on EC2 is the "oracle" build, which needs its own machine/environment to get through the day. It uses the label expression "oracle", which is matched only by the EC2 configuration. But the oracle build is only started manually anyway. The whole idea is that we only need to start up this EC2 instance about once a week and pay pennies, and Jenkins just does it for us.
But Jenkins spins up two EC2 instances whenever a new build starts in any of many projects, even though none of those builds will actually run on the EC2 nodes due to the labeling. So we pay for two EC2 instances to sit there for 30 minutes, doing absolutely nothing, for every build during the day.
The EC2 plugin should decide to spin up nodes based not just on load, but also on whether the label expressions currently in the build queue would even match those nodes. If nothing is going to run on them, they shouldn't spin up.
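To make the request concrete, here's a rough sketch of the check we'd like provisioning to do, written as standalone Python with made-up names (`node_satisfies`, `should_provision` are illustrative, not the plugin's actual API, and only the simple "&&" case is handled; the real plugin would presumably use Jenkins' own label-expression machinery):

```python
def node_satisfies(expression, node_labels):
    """Evaluate a simple '&&'-only label expression against a node's label set."""
    required = [term.strip() for term in expression.split("&&")]
    return all(term in node_labels for term in required)

def should_provision(queued_expressions, ec2_node_labels):
    """Spin up an EC2 node only if some queued build could actually run on it."""
    return any(node_satisfies(expr, ec2_node_labels)
               for expr in queued_expressions)

# A queue full of master-only builds should not trigger EC2 provisioning:
print(should_provision(["mysql&&cpython3"], {"oracle"}))  # False
# ...but a queued "oracle" build should:
print(should_provision(["oracle"], {"oracle"}))           # True
```

With a check like this in front of the existing load-based logic, the everyday "mysql&&cpython3"-style builds would never wake the "oracle" instance.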