A Jenkins master server with the OpenStack Cloud plugin 1.8.
The OpenStack plugin is configured with one instance type, "openstack-medium".
The maximum number of OpenStack nodes of this type is set to 5.
The master has 0 executors.
No slave nodes are attached to the master, so everything is expected to run on OpenStack workers only.
I have 10 build jobs that are not restricted to run on any particular label.
Now I start a build job. In the Jenkins console I see the plugin create and provision a new VM. So far so good: the new VM comes fully up and running, with proper permissions, SSH access, and the SSH port open. All good.
But the job is still stuck in the "waiting for available executor" state...
Then I see the plugin create a second worker...
Then it creates all 5 workers (the configured maximum). They are all fully functional, but not doing anything!
The job is still "waiting for available executor".
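As a side note, while the job sits in this state, the reason Jenkins gives can also be read programmatically from the queue REST API at JENKINS_URL/queue/api/json, which exposes a "why" field per queued item. A minimal sketch that extracts it — the payload below is a hand-made sample in the shape of that API's response, not output captured from my server:

```python
import json

# Hand-made sample in the shape returned by Jenkins' /queue/api/json,
# trimmed to the fields used below; the real response has many more.
sample = json.loads("""
{
  "items": [
    {
      "task": {"name": "my-build-job"},
      "why": "Waiting for next available executor",
      "stuck": true
    }
  ]
}
""")

for item in sample["items"]:
    # "why" explains what the scheduler is waiting for;
    # "stuck" flags items Jenkins considers blocked for too long.
    print(f'{item["task"]["name"]}: {item["why"]} (stuck={item["stuck"]})')
```

Against a live server, the same JSON can be fetched with any HTTP client and fed through the loop above.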
Then I see "cleaning up openstack nodes" in the Jenkins console, and after 3 minutes (the retention time I configured for these nodes) the workers are deleted from both Jenkins and OpenStack.
Then the OpenStack cloud starts creating new nodes again... all 5 of them.
And again they are not used.
This cycle repeats forever, without any job ever using any of the fully functional nodes.
Now, if I edit a job, enable "Restrict where this project can be run", and set the label of one of the OpenStack instance types (openstack-medium in this case), then the job immediately starts running on an OpenStack VM.
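For reference, that workaround corresponds to the following two elements in a freestyle job's config.xml (fragment shown for illustration only; `<canRoam>` must be false for the label restriction to take effect):

```xml
<!-- Fragment of a freestyle job's config.xml after enabling
     "Restrict where this project can be run" with the cloud's label. -->
<project>
  <assignedNode>openstack-medium</assignedNode>
  <canRoam>false</canRoam>
</project>
```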
So, to summarize:
I assume it is a bug that a job that is NOT restricted to a label does NOT run on an OpenStack worker that was created specifically for it.