JENKINS-40933

Optimize cold load time by concatenating common JS files

    • Type: Task
    • Resolution: Unresolved
    • Priority: Minor
    • Component: blueocean-plugin
    • None

      Just like JENKINS-40932, we want to combine some files together to improve how the browser parallelises requests (we do too many requests - see the waterfall report).

      These JS files should be concatenated into a single resource and served to the browser together:

      • react-router
      • react-redux
      • reselect
      • keymirror
      • redux-thunk
      • immutable

      This awesome tool provides some insight into what can be optimised (PageSpeed + YSlow + waterfall):

      https://gtmetrix.com/reports/ci.blueocean.io/ijzy6djd

      You can see from this that having fewer HTTP requests will allow the browser to do less "waterfall loading".

      Once we have this, we can re-evaluate on high- and low-latency connections (high-latency conditions are much worse than that tool shows, about 10x slower for cold load).


          Tom FENNELLY added a comment - - edited

          This is not as easy as it might sound "in theory". The easiest way might be to bundle them directly in blueocean.js, which is what we used to do, but we then split them out because that was seen as a performance bottleneck. I feel like we get caught one way or another on these things ... we should try to use fewer packages.


          James Dumay added a comment -

          tfennelly blueocean.js is already 700k - I think all those files would increase it to >800k (700k is too large anyway). Worth exploring what the options are here if possible before committing to it.


          Michael Neale added a comment -

          If there were a way to test one or the other, we could decide based on data (which we haven't been doing until now). Not easy, I know.


          Tom FENNELLY added a comment -

          Yep .... NPM package usage explosion.

          I had been doing work on slimming down the generated bundle packages at one point. We parked it though, as other stuff was deemed more important. Some of that "might" help a bit, but bottom line is ... we have quite a few npm deps and some of them are quite hefty.


          Michael Neale added a comment -

          tfennelly

          the GTmetrix score isn't so bad compared to many websites; where things fall apart seems mostly to do with latency, not so much the size of the artifacts.
          Once you introduce a bit of latency, things slow down exponentially.
          Not sure if that helps with slimming things down...


          Tom FENNELLY added a comment -

          I'd be more inclined to leave this particular one until after we eliminate as many other requests as we can. All others would be lower hanging fruit.


          Tom FENNELLY added a comment -

          michaelneale ^^

          Tom FENNELLY added a comment -

          I'd be very interested to see how things perform if we put an HTTP/2-enabled proxy on dogfood.


          Tom FENNELLY added a comment - - edited

          ydubreuil is, at some point, going to enable HTTP/2 on dogfood.


          Tom FENNELLY added a comment -

          Moved to post-release, seeing as JENKINS-40992 has been moved there.


          Yoann Dubreuil added a comment -

          I reworked the HTTPS support for the sandbox, and I took the opportunity to add support for HTTP/2 on it. This and TCP Fast Open lead to much less latency.

          I applied the work to dogfood too.
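
          For reference, a minimal nginx reverse-proxy sketch of that kind of setup (hypothetical - Yoann's actual configuration isn't shown in this ticket; hostnames, ports, and cert paths below are made up). Both HTTP/2 and TCP Fast Open are enabled on the listen directive:

```nginx
# Hypothetical sketch of an HTTP/2 + TCP Fast Open reverse proxy
# in front of Jetty; NOT the actual dogfood configuration.
server {
    listen 443 ssl http2 fastopen=256;
    server_name ci.blueocean.io;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # Jenkins/Jetty backend
        proxy_set_header Host $host;
    }
}
```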


          Michael Neale added a comment - - edited

          From the "arse end of the world" a cold load is now about 12s, which seems a reasonable reduction. Collapsing those js files would get it to probably 10s, not huge, but something. So yeah, everything helps a bit. I think http2 is out of reach of a lot of people still, but it is good to know it makes a good difference.

          In terms of metrics: https://gtmetrix.com/reports/ci.blueocean.io/RUoAc9Hh is a bit better. Aiming for 1s by that measure so this is close...

          Other than that, there are likely other opportunities to speed up the "subjective" experience (time to showing something on screen, what was there previously.. but for another ticket).


          Tom FENNELLY added a comment -

          You guys have to be on drugs


          Michael Neale added a comment -

          Legal in California.

          But more seriously, Chrome will only load 6 at once, so all 3 of these take up one loading "cascade". All we can do is experiment and iterate (now we can measure, thanks to the PR builder).
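
          That per-host connection limit can be put as back-of-envelope arithmetic (~6 parallel HTTP/1.1 connections per host is Chrome's behaviour; the request counts below are made up for illustration):

```javascript
// With HTTP/1.1 and ~6 parallel connections per host, N same-host
// requests need roughly ceil(N / 6) sequential loading "cascades".
function cascades(requestCount, maxParallel = 6) {
  return Math.ceil(requestCount / maxParallel);
}

console.log(cascades(18)); // 18 requests -> 3 cascades
console.log(cascades(12)); // merging bundles: 12 requests -> 2 cascades
```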


          Yoann Dubreuil added a comment -

          I can enable caching for all Blue Ocean static resources on the reverse proxy side; I think it could be interesting to see the effect. What is killing performance is latency, and avoiding Jetty should be helpful. I'll configure this in sandbox.
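
          A reverse-proxy caching rule of the sort described might look like this in nginx (a hedged sketch - the real sandbox config isn't shown in this ticket, and the cache zone name and URL pattern below are illustrative):

```nginx
# Hypothetical sketch: serve Blue Ocean static resources from the
# proxy cache so repeat requests never reach Jetty. The cache path,
# zone name, and location pattern are made-up examples.
proxy_cache_path /var/cache/nginx/blueocean levels=1:2
                 keys_zone=blueocean_static:10m max_size=100m;

location ~ \.(js|css|png|woff2?)$ {
    proxy_cache blueocean_static;
    proxy_cache_valid 200 1h;   # cache successful responses for 1h
    expires 1h;                 # let browsers cache too
    proxy_pass http://127.0.0.1:8080;
}
```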


          James Dumay added a comment -

          Could we hold off on any changes that might require users to set up their HTTP proxy in a very specific way? We need to ensure this is as optimized as possible OOTB before looking at these higher-level things.


          Yoann Dubreuil added a comment -

          Configuration done. But it will not really be helpful; most things are already cached by the browser anyway. Could help a bit for a cold cache, maybe.


          Tom FENNELLY added a comment -

          "as optimized as possible OOTB before looking at these higher level things"

          Imo, we are at that point after we do the likes of JENKINS-40941, i.e. not what's being proposed/requested here.


          Michael Neale added a comment -

          tfennelly how do we know without trying to combine the react stuff as mentioned here? (Is it not possible?)


          Yoann Dubreuil added a comment -

          I rolled back my changes. Anyway, given it was only useful for cold caches, it was not worth it.


          Michael Neale added a comment -

          ydubreuil if we could try it on some server, it would be worth it, as cold load is exactly the case we want to optimise for right now.


          Tom FENNELLY added a comment - - edited

          "how do we know without trying to combine the react stuff as mentioned here"

          We have tried it before (see below). I suppose I'm just not too gone on fundamental build changes for what really seems to be an edge edge case. How many production setups run Jenkins on one side of the world and expect cold loading to be fast on a browser on the other side of the world? That's not a realistic OOTB use case imo, unless someone can show me otherwise.

          "is it not possible?"

          It is possible. We had it that way before and moved to splitting it out to see if async loading would help, and it did seem to at the time.


          Michael Neale added a comment -

          tfennelly ack, yes I recall. Well, I think I might close this. Do you have other things you would like to try that could help trim things in future?


          Tom FENNELLY added a comment -

          There was some work I was doing on bundling some time ago (that we parked) around making them "more accurate" in terms of what should be in them. I know that sounds vague (and I can explain more if you don't find it too boring - bundling is not all that exciting), but the upshot in terms of bundle sizes is expected to be varied in that some bundles will get slimmer (removal of modules that should not be in there) and some may get a bit heavier (addition of some modules that should be in there). All post 1.0 as it's not biting us atm.


          Michael Neale added a comment -

          SGTM


            Assignee: Unassigned
            Reporter: James Dumay (jamesdumay)
            Votes: 0
            Watchers: 4