- Improvement
- Resolution: Unresolved
- Minor
- None
I have some jobs that can't or shouldn't run concurrently with any other job, e.g. a test where we want to measure the runtime with a large input, or a housekeeping job that cleans all the sandboxes and can cause file-locking issues. The Heavy Job plugin with the weight set equal to the number of executors fixed my problem, but if the number of executors goes up, other jobs would get in in parallel again.
I propose that a special value, such as 0, "max", or "all", can be put in the job weight so that the Heavy Job plugin occupies all executors.
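A minimal sketch of the proposed semantics, assuming a sentinel value of 0 means "all executors"; `resolveWeight` and its parameters are hypothetical names for illustration, not the Heavy Job plugin's actual API:

```java
// Hypothetical sketch: a non-positive configured weight is treated as
// "occupy every executor on whatever node the build lands on".
public class WeightResolver {

    /**
     * @param configuredWeight weight set on the job; 0 or below is the
     *                         proposed "all executors" sentinel
     * @param nodeExecutors    executor count of the node chosen for the build
     */
    public static int resolveWeight(int configuredWeight, int nodeExecutors) {
        if (configuredWeight <= 0) {
            return nodeExecutors; // sentinel: fill the whole node
        }
        // A fixed weight larger than the node makes no sense; cap it.
        return Math.min(configuredWeight, nodeExecutors);
    }

    public static void main(String[] args) {
        System.out.println(resolveWeight(0, 4)); // sentinel: all 4 executors
        System.out.println(resolveWeight(2, 4)); // normal fixed weight
        System.out.println(resolveWeight(8, 4)); // capped at node size
    }
}
```

The key point is that the weight would be resolved against the node actually chosen, which is exactly where the SubTask timing problem discussed below comes in.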
I analyzed this, because we also want to use all executors on a node.
Problem: when the job gets split into SubTasks, we don't yet know which slave the job will be built on next time, and each node can have a different number of executors.
A first approach was to use the maximum number of executors only on jobs that have an assigned label. This works as long as the assigned node has a unique label; when there are several nodes with the same label but different numbers of executors, it won't work.
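The ambiguity can be made concrete with a small model; `weightForLabel` and the node/executor maps are made up for illustration and do not correspond to Jenkins API calls:

```java
import java.util.List;
import java.util.Map;
import java.util.OptionalInt;

// Models the problem above: if the weight must be fixed before a node is
// chosen, a label is only usable when every node carrying it has the same
// number of executors; otherwise there is no single correct weight.
public class LabelWeight {

    static OptionalInt weightForLabel(Map<String, Integer> executorsByNode,
                                      List<String> nodesWithLabel) {
        int first = executorsByNode.get(nodesWithLabel.get(0));
        for (String node : nodesWithLabel) {
            if (executorsByNode.get(node) != first) {
                return OptionalInt.empty(); // ambiguous: counts differ
            }
        }
        return OptionalInt.of(first); // unambiguous: all nodes match
    }

    public static void main(String[] args) {
        Map<String, Integer> executors = Map.of("a", 2, "b", 2, "c", 4);
        // Same executor count on both labelled nodes: weight is well-defined.
        System.out.println(weightForLabel(executors, List.of("a", "b")));
        // Different counts under one label: no single weight works.
        System.out.println(weightForLabel(executors, List.of("a", "c")));
    }
}
```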
Any ideas?