We have a problem with Pipeline jobs that start on the master node and are then redirected to specific slaves. They start and then sit on the last executor doing nothing for about 5-10 minutes (see picture 01.png); after that time the job is redirected to a specific slave and the checkout starts.
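For reference, the Pipeline jobs are roughly of this shape (a simplified sketch only; the label `deploy-slave`, the stage names, and the deploy step are placeholders, not our real configuration):

```groovy
// Simplified sketch of one of our Pipeline jobs (label and steps are placeholders).
// The job is scheduled first, then node() requests an executor on a matching slave,
// where the SCM checkout runs.
node('deploy-slave') {
    stage('Checkout') {
        checkout scm          // this step only starts after the 5-10 minute wait
    }
    stage('Deploy') {
        sh './deploy.sh'      // placeholder deploy step
    }
}
```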
On our master node we have "***Generator" jobs which generate other jobs via the Job DSL plugin and are triggered automatically by SVN polling. The master node has about 20 free executors (see picture 02.png). When all of them are free, the Pipeline jobs do not block each other, but if 1 or more DSL jobs are running on the master, the Pipeline jobs wait in the queue on the last executor, doing nothing for about 5-10 minutes; after that time the job is redirected to a specific slave and the process continues normally.
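The generator jobs are ordinary seed jobs that run on the master, are triggered by SVN polling, and execute a Job DSL script along these lines (a simplified sketch; the job name, repository URL, and build step are placeholders):

```groovy
// Simplified sketch of the DSL script run by a "***Generator" seed job
// (job name, SVN URL, and build step are placeholders).
// The seed job itself runs on the master and is started by SVN polling.
job('example-deploy-job') {
    scm {
        svn {
            location('https://svn.example.com/repo/trunk')   // placeholder URL
        }
    }
    steps {
        shell('./build.sh')   // placeholder build step
    }
}
```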
Why does one DSL job block all Pipeline jobs even though there are 19 free executors?
UPDATE:
On 03.png you can see how the deploy jobs accumulate below the last free executor (they are not even assigned to a specific executor, they just wait below the last master executor).
- job-dsl:1.63
I deduced that the problem of DSL jobs blocking executors is related to an increasing build-time trend: after a Jenkins restart, the DSL jobs run slower and slower. See pictures 04.png and 05.png (these two screenshots show the build-time trend for two different job generators). As more and more generators take longer, about a week after a Jenkins restart the DSL generators run for a very long time on the master node. If at least 2 of them are running and the estimated time to generate a job is about 7-10 minutes, the other Pipeline jobs wait in the queue below the last executor.
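To see what reason Jenkins gives for the queued Pipeline jobs while they sit there, I can run something like this in the script console (just a diagnostic sketch, not part of our job configuration):

```groovy
// Diagnostic sketch for the Jenkins script console: list queued items
// together with the reason Jenkins reports for why each is still waiting.
import jenkins.model.Jenkins

Jenkins.instance.queue.items.each { item ->
    println "${item.task.name} -> ${item.why}"
}
```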
So the question now is why the DSL jobs' build-time trend keeps increasing after a Jenkins restart.