Type: Improvement
Resolution: Unresolved
Priority: Minor
I am building multiple editions of the same software package across multiple platforms (linux, windows, macos) and architectures (x86-64, aarch64) with a single declarative Pipeline that uses a matrix. I trigger new builds via GitSCM polling, kicked off by a webhook received on the master node, and only when the refspec matches. The follow-on stages that actually perform the builds run on physical nodes whose labels match the axis values in the matrix.
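For reference, a minimal sketch of that kind of setup (not my actual Jenkinsfile; the stage names, label strings, and axis values are placeholders, and the empty pollSCM schedule assumes a Jenkins version that accepts it to mean "poll only when notified by a hook"):

{code:groovy}
pipeline {
    agent none
    triggers {
        // Empty schedule: no periodic polling; polling runs only when the webhook notifies Jenkins.
        pollSCM('')
    }
    stages {
        stage('Build all editions') {
            matrix {
                // Each matrix cell runs on a physical node whose labels match the axis values.
                agent { label "${PLATFORM} && ${ARCH}" }
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows', 'macos'
                    }
                    axis {
                        name 'ARCH'
                        values 'x86-64', 'aarch64'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            checkout scm
                            // platform/architecture-specific build and archiving steps go here
                        }
                    }
                }
            }
        }
    }
}
{code}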
My problem is that, in the Workflow Job Plugin (which, as far as I understand it, implements Pipelines), WorkflowJob::poll() runs a for loop over the return value of perhapsCompleteBuild.checkouts(), a List<SCMCheckout> containing every checkout that was performed in the lastSuccessfulBuild. For my job that is ~12+ different nodes, which seems excessive when I only need the check to happen once. Worse, if any node is missing a polling baseline for any reason, a build is triggered even if the refspec doesn't match. That build immediately succeeds but produces no artifacts, which completely sabotages any downstream jobs that rely on artifacts from the last successful build of this job.
This appears to be desired behavior, for some reason? So I am requesting an option that makes WorkflowJob::poll() filter checkouts() down to only the checkout(s) matching node(s) I specify, and feed those to the for loop instead. That way I can say, "when you perform the polling step, please only poll on the master node for changes". This would not only speed up the polling step, but also avoid the spurious "empty" successful builds caused by missing baselines on other nodes.
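To make the request concrete, here is one way it might look from the Jenkinsfile side. The pipeline structure and the pollSCM trigger are real; the node-restriction option itself (here called pollOnNodes, inside options) is purely hypothetical syntax for the behavior I am asking for:

{code:groovy}
pipeline {
    agent none
    options {
        // HYPOTHETICAL option (does not exist today): tell WorkflowJob.poll() to consider
        // only the checkout(s) recorded on the listed node(s) when comparing against the
        // polling baseline, instead of every checkout from the last successful build.
        pollOnNodes('master')
    }
    triggers {
        // Polling still runs only when the webhook notifies Jenkins of a push.
        pollSCM('')
    }
    stages {
        stage('Build all editions') {
            steps {
                echo 'matrix build omitted for brevity (same as the sketch above)'
            }
        }
    }
}
{code}

The exact shape of the knob matters less than the effect: any equivalent form (a job property, a trigger parameter, or a global default) would be fine, as long as polling can ignore baselines recorded on the build agents.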