Jenkins / JENKINS-16529

Provide the ability to turn off one time force sync after a failed build


      Description

      We are currently trying to lighten the load on our Perforce server, and one contributor to this load is jobs doing unnecessary force syncs.

      I may be wrong (I looked at the source and couldn't find anything doing this), but I seem to remember that whenever a build fails, it triggers a "one time force sync" on the next build.

      Perhaps this is a result of not being on the latest Perforce plugin (we are on 1.3.18), or perhaps I am totally wrong and this never existed. If it does exist, I would like a way to configure it on/off on a per-job basis.

      Thanks for the great plugin and support!

        Attachments

          Activity

          rpetti Rob Petti added a comment -

          This simply isn't the case. The plugin will only force sync once with the "one time force sync" option. After the sync is completed, it disables it. The result of the build is never checked, and the one-time force sync is never re-enabled.

          pmaccamp Patrick McKeown added a comment -

          Hmm ok thanks, I'll have to take a deeper look into the behavior we are seeing.

          pmaccamp Patrick McKeown added a comment -

          Looked into this more and found this:

          http://jenkins.361315.n4.nabble.com/Perforce-force-syncing-for-no-reason-td2993189.html

          It's not super obvious (I had issues similar to yours for a while), but Hudson automatically cleans up old workspaces that it thinks are stale... which works for some SCMs, but does not work well for Perforce at all in a configuration with multiple slaves.

          (Basically what happens, I think: Two slaves A and B are in use. The last build of job Foo was on A, while the last build on B was old in the eyes of the Workspace Cleanup thingy. The workspace for build B is cleaned. Job Foo has a submitted change. Hudson polls with Perforce [on slave A] and finds there is a change. If slave A's executors are unavailable, the build is started on B. As far as Perforce knows, the workspace on B is there and only needs the updates since its last old build. But, in fact, because it has been cleaned up, that's not enough.)

          You are probably aware of this, but thought I'd share.
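The stale-workspace scenario above can be sketched with a couple of p4 commands. The client and depot names here are hypothetical (not from the issue); the point is how the server's have-list diverges from what is actually on disk:

```shell
# Hypothetical client/depot names for illustration only.
# Jenkins deleted slave B's workspace directory, but the Perforce
# server's have-list for that client still records the old revisions.

# Incremental sync: the server only sends revisions newer than the
# have-list, so the files deleted from disk are never re-downloaded.
p4 -c jenkins-foo-slaveB sync //depot/foo/...

# A force sync re-transfers everything regardless of the have-list,
# which is the heavy operation the reporter wants to avoid.
p4 -c jenkins-foo-slaveB sync -f //depot/foo/...
```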

          rpetti Rob Petti added a comment -

          That issue was fixed back in 1.3.11. Rather than triggering a force-sync, the plugin flushes the deleted workspace's client spec to revision 0, so it will only resync the workspace that was deleted. I can only see this causing issues if for some reason you have configured your plugin to share the same client spec across multiple slaves.
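The fix Rob describes maps onto plain p4 commands roughly like this (client name hypothetical). `p4 flush` is an alias for `p4 sync -k`: it updates the client's have-list without transferring any files:

```shell
# After Jenkins deletes a workspace directory, rewind that client's
# have-list to revision 0. The flush itself transfers no files.
p4 -c jenkins-foo-slaveB flush //...#0

# The next regular (non-force) sync sees the client as having nothing
# and re-downloads the whole workspace -- but only for the client whose
# directory was actually deleted, not for every slave.
p4 -c jenkins-foo-slaveB sync
```

This is also why sharing one client spec across several slaves breaks the scheme: flushing the spec for one machine rewinds the have-list used by all of them.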


            People

            Assignee:
            rpetti Rob Petti
            Reporter:
            pmaccamp Patrick McKeown
            Votes:
            0
            Watchers:
            2
