Type: Bug
Resolution: Fixed
Priority: Critical
Labels: None
1. Set up an "in demand" slave.
2. Set up a project that only builds on that slave.
3. The slave goes offline.
4. The SCM poll sees that the workspace is offline and triggers a new build (see the sketch after this list).
5. The slave comes online and the build completes.
6. The slave goes back offline again, as in step 3.
7. And here the infinite build loop begins, since we now end up back at step 4.
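To make the loop concrete, here is a self-contained sketch of the decision described in steps 4-7. The classes and names are illustrative stand-ins, not the actual Hudson/Jenkins code:
{code:java}
/**
 * Illustrative sketch of the reported polling loop. These classes are
 * stand-ins, not the real Hudson API.
 */
class PollingLoopSketch {
    static class Slave {
        boolean online = false;                 // on-demand slave starts offline
    }

    static class Project {
        final Slave slave;
        int lastBuiltChangelist = 100;          // changelist of the last build
        Project(Slave slave) { this.slave = slave; }
    }

    /** The flawed decision: if the workspace is unreachable, assume changes. */
    static boolean pollSaysBuildNeeded(Project p, int latestChangelist) {
        if (!p.slave.online) {
            return true;                        // workspace offline -> "changes found"
        }
        return latestChangelist != p.lastBuiltChangelist;
    }

    public static void main(String[] args) {
        Project p = new Project(new Slave());
        int latestChangelist = 100;             // nothing new in Perforce

        for (int poll = 1; poll <= 3; poll++) {
            if (pollSaysBuildNeeded(p, latestChangelist)) {
                p.slave.online = true;          // demand brings the slave up
                System.out.println("Poll " + poll + ": build triggered (no real change!)");
                p.slave.online = false;         // slave goes back offline after the build
            }
        }
        // A build is triggered on every poll, forever -- the reported loop.
    }
}
{code}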
I'm not sure why Hudson is trying to read the contents of the workspace. The Perforce plugin knows the changelist number used for the last build, so by polling Perforce it should also be able to see that the latest changelist number is different and trigger a build.
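A workspace-free poll along the lines suggested above would only need to compare changelist numbers. A minimal sketch, where getLatestChangelist() is a hypothetical stand-in (a real implementation might run "p4 changes -m 1 //depot/path/..." from the master):
{code:java}
/**
 * Hypothetical workspace-free poll: compare the changelist number of the
 * last build with the latest one in the depot. getLatestChangelist() is
 * stubbed here; no workspace is touched at any point.
 */
class ChangelistPollSketch {
    static int getLatestChangelist() {
        return 101;                              // stubbed for the sketch
    }

    static boolean buildNeeded(int lastBuiltChangelist) {
        return getLatestChangelist() > lastBuiltChangelist;  // pure number comparison
    }

    public static void main(String[] args) {
        System.out.println(buildNeeded(100));    // true: changelist 101 > 100
    }
}
{code}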
In our environment each slave is actually a virtual machine controlled by scripts; after a build completes it is taken offline, so checking the workspace will NEVER work. This completely breaks our setup, because builds are only triggered by SCM changes.
When a slave is brought back online, the VM is reverted to its snapshot to ensure the build is "clean", so again this means checking the workspace contents will always fail.
It's not that simple. The plugin tells Hudson/Jenkins that a workspace is required so that polling actually takes place on the slave that the job is configured for. Each job is configured for the specific environment it runs in, so it's unreasonable to assume that the same configuration will work on the master (see my comments above...)
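For reference, the hook in question is requiresWorkspaceForPolling() on hudson.scm.SCM: when it returns true (the default), polling is routed to the node holding the job's workspace. A purely illustrative sketch of what opting out would look like (the class below is hypothetical, declared abstract so the remaining SCM methods can be omitted):
{code:java}
import hudson.scm.SCM;

// Hypothetical sketch, not the real Perforce plugin. Declared abstract so
// the other SCM methods (checkout, changelog parsing, ...) can be omitted.
public abstract class WorkspaceFreePerforceScm extends SCM {
    @Override
    public boolean requiresWorkspaceForPolling() {
        // Returning false would let polling run on the master without a
        // workspace -- but that only works once the plugin has a
        // node-specific Perforce configuration to poll with, which is
        // exactly the open problem described here.
        return false;
    }
}
{code}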
Until the plugin supports node-specific Perforce configurations, this issue cannot be fixed.