- Bug
- Resolution: Fixed
- Major
- None
- Platform: All, OS: All
When the build-publisher transfers a job to the public Hudson instance, the
entire job configuration is copied verbatim, including the build triggers. This
is problematic because the published jobs still attempt to query the SCM and
place themselves in the build queue on the public Hudson instance. Provided the
job is runnable on the public instance, the job will then be run twice! If it is
not runnable (e.g. tied to a label or node that the public instance doesn't know
about), the job will sit in the public instance's build queue indefinitely.
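For illustration, the kind of trigger block that gets copied verbatim in the job's config.xml might look like the following (the polling schedule shown is hypothetical); it is this element that keeps the published copy polling the SCM:

```xml
<triggers class="vector">
  <hudson.triggers.SCMTrigger>
    <!-- hypothetical schedule: poll the SCM every five minutes -->
    <spec>*/5 * * * *</spec>
  </hudson.triggers.SCMTrigger>
</triggers>
```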
I think there are two ways to correct this, but neither one is perfect:
1) add the following to ExternalProjectProperty.doAcceptBuild() [r18636,
line 130]:

    for (TriggerDescriptor trigger : project.getTriggers().keySet())
        project.removeTrigger(trigger);
    project.save();
2) use an XML filter to replace the <triggers> element in the job's original
config.xml with an empty element before transmitting it to the public
instance (i.e. in PublisherThread.submitConfig()). [I can provide a patch that
implements this.]
The first option has the benefit of being concise and of working through the
standard Hudson core API. However, the job arrives with its build triggers
intact and is loaded for a brief window before the private instance transmits
the build (at which point doAcceptBuild() runs and removes the triggers). The
second option is logically cleaner in that it never sends the triggers in the
first place, but it relies on manipulating config.xml directly instead of
working through the Hudson API.