Jenkins / JENKINS-41205

Stage graph unsuitable for large and/or complex pipelines


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Component/s: blueocean-plugin
    • Labels: None
    • Environment: Jenkins 2.40
      Blue Ocean 1.0.0-b17

    Description

      Improvement on roadmap

      This improvement is on the Blue Ocean project roadmap. Check the roadmap page for updates.

      The Blue Ocean stage graph is great for small, simple pipelines; however, it breaks down with many parallel builds. See the attached screenshot for an example.

      Because 'stage' can no longer be nested within 'parallel', all of our steps must belong under a single 'Test' stage. We have 19 parallel jobs, which is not an uncommon number for iOS/Android development where many combinations of app, device and OS version need to be tested. We'd actually like to split some of the jobs into smaller chunks to take advantage of idle build agents, but this would greatly exacerbate the problem.

      Grouping jobs under multiple stages would improve the UI experience, but it would also drastically increase the runtime of our integration runs, since stages are executed serially.

      I envision two possible solutions:

      1. Stages have a 'parallel' option that allows them to run at the same time as other parallel stages (see the sketch below).
      2. A step is introduced that is used purely as an annotation for the purpose of rendering a more appropriate graph. Ideally, the step would be deeply nestable, allowing for complex graph hierarchies.
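
      As an illustration of option 1 only (not an implemented feature), a Jenkinsfile along these lines would let each combination appear as its own named stage under 'Test'. The stage names and the './build.sh' / './run-tests.sh' scripts are hypothetical:

      // Illustrative sketch of option 1: a 'parallel' block whose children are
      // full stages, so the stage graph can render one node per combination.
      pipeline {
          agent any
          stages {
              stage('Build') {
                  steps {
                      sh './build.sh' // hypothetical build script
                  }
              }
              stage('Test') {
                  parallel {
                      stage('iOS 10 / iPhone 7') {
                          steps { sh './run-tests.sh ios10 iphone7' }
                      }
                      stage('Android 7.1 / Pixel') {
                          steps { sh './run-tests.sh android71 pixel' }
                      }
                      // ...one stage per app/device/OS combination (19 in our case)
                  }
              }
          }
      }

      Declarative Pipeline later shipped a similar parallel-stages block, but the syntax above is meant only to sketch the idea.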

      Thanks for all the hard work on Blue Ocean, it's really shaping up nicely and I eagerly await each new release.



          Activity

            kzantow Keith Zantow added a comment -

            benlangfeld if you submit a PR that fixes the issue, it would absolutely be considered for inclusion; we'd just have a look and make sure tests pass, etc. Submissions are always welcome!

            cliffmeyers Cliff Meyers added a comment - edited

            Seconded. benlangfeld I had looked at this problem in the past. Another option is to do successive fetches until all nodes/stages are loaded. If you look at the REST responses, you'll see there is pagination data written into a "Link" response header, IIRC. That's a way to determine whether there is additional data to be fetched, and you could write some logic to grab, say, n=100 and perform successive fetches until the Link header indicates there is no more data.

            We may want to be careful about doing a massive fetch up front (say n=500), as for complex pipelines this might have a server-wide perf impact. I recall discussing this with vivek a while back; can you refresh my memory on whether it might be preferable to do a single large fetch (say n=500) or several smaller fetches (n=100) until all data is loaded? Intuitively, fewer large fetches seem more efficient from the client's perspective, but I seem to recall a concern with loading a large number of nodes concurrently in the context of a single request?
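
            For illustration only: the actual Blue Ocean client is JavaScript, and the endpoint shape, paging parameter, and auth handling below are assumptions. The paging approach described above amounts to a loop like this Groovy sketch, which fetches 100 nodes at a time and follows the Link header's rel="next" target until no further page is advertised:

            import groovy.json.JsonSlurper

            // Sketch of successive paged fetches: keep requesting pages of nodes and
            // follow the RFC 5988 Link header's rel="next" target until the server
            // stops advertising one. URL shape and paging parameter are assumed.
            def fetchAllNodes(String nodesUrl) {
                def nodes = []
                def url = "${nodesUrl}?limit=100"            // assumed paging parameter
                while (url) {
                    def conn = new URL(url).openConnection()
                    nodes += new JsonSlurper().parse(conn.inputStream)
                    def link = conn.getHeaderField('Link')   // e.g. <...?start=100&limit=100>; rel="next"
                    def m = link ? (link =~ /<([^>]+)>\s*;\s*rel="next"/) : null
                    url = m?.find() ? m.group(1) : null      // stop when no next page is advertised
                }
                return nodes
            }

            Whether one large fetch or several smaller ones is gentler on the server is exactly the open question above; the loop is the same either way, only the limit changes.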

            benlangfeld Ben Langfeld added a comment -

            Following Link is precisely what I had in mind, cliffmeyers. We'll prep a patch. Thanks, everyone.

            michaelneale Michael Neale added a comment -

            benlangfeld just make sure you are nice and up to date with master, as some recent changes were merged for how it follows along (may not affect you, just FYI). As for fetching the pages: absolutely, why not? If you have a PR, that would be wonderful. Go for it!

            benlangfeld Ben Langfeld added a comment -

            A patch to resolve this is proposed at https://github.com/jenkinsci/blueocean-plugin/pull/1517. I would appreciate a review, particularly from cliffmeyers.


            People

              Assignee: benlangfeld Ben Langfeld
              Reporter: ileitch Ian Leitch
              Votes: 19
              Watchers: 36
