  Jenkins / JENKINS-40933

Optimize cold load time by concatenating common JS files


    Details

    • Similar Issues:
    • Epic Link:

      Description

      Just like JENKINS-40932, we want to combine some files so the browser can parallelise requests better (we make too many requests; see the Waterfall report).

      These JS files should be concatenated into a single resource and served to the browser together:

      • react-router
      • react-redux
      • reselect
      • keymirror
      • redux-thunk
      • immutable

      This awesome tool provides some insight into what can be optimised (page speed + yslow + waterfall):

      https://gtmetrix.com/reports/ci.blueocean.io/ijzy6djd
      You can see from this that having fewer HTTP requests will allow the browser to do less "waterfall loading".

      Once we have this, we can re-evaluate on both high- and low-latency connections (high-latency performance is much worse than that tool shows, roughly 10x slower for a cold load).

        Attachments

          Issue Links

            Activity

            jamesdumay James Dumay created issue -
            jamesdumay James Dumay made changes -
            Field Original Value New Value
            Epic Link JENKINS-37957 [ 174099 ]
            jamesdumay James Dumay made changes -
            Sprint post-release [ 181 ]
            jamesdumay James Dumay made changes -
            Rank Ranked higher
            jamesdumay James Dumay made changes -
            Sprint post-release [ 181 ] tethys [ 161 ]
            jamesdumay James Dumay made changes -
            Rank Ranked lower
            jamesdumay James Dumay made changes -
            Assignee Tom FENNELLY [ tfennelly ]
            jamesdumay James Dumay made changes -
            Rank Ranked higher
            jamesdumay James Dumay made changes -
            Priority Minor [ 4 ] Critical [ 2 ]
            michaelneale Michael Neale made changes -
            Description edited (added GTmetrix report link and cold-load latency notes)
            jamesdumay James Dumay made changes -
            Priority Critical [ 2 ] Major [ 3 ]
            tfennelly Tom FENNELLY added a comment - - edited

            This is not as easy as it might sound "in theory". The easiest way might be to bundle them directly in blueocean.js, which is what we used to do, but we then split them out because that was seen as a performance bottleneck. I feel like we get caught one way or another on these things ... we should try to use fewer packages.

            jamesdumay James Dumay added a comment -

            Tom FENNELLY blueocean.js is already 700k; I think adding all those files would increase it to >800k (700k is too large anyway). Worth exploring what the options are before committing to it.

            michaelneale Michael Neale added a comment -

            If there were a way to test one against the other, we could decide based on data (which we haven't been doing until now). Not easy, I know.

            tfennelly Tom FENNELLY added a comment -

            Yep .... NPM package usage explosion.

            I had been doing work on slimming down the generated bundle packages at one point. We parked it though, as other stuff was deemed more important. Some of that "might" help a bit, but bottom line is ... we have quite a few npm deps and some of them are quite hefty.

            michaelneale Michael Neale made changes -
            Attachment mr.jpg [ 35429 ]
            michaelneale Michael Neale added a comment -

            Tom FENNELLY

            The GTmetrix score isn't so bad compared to many websites; where things fall apart seems mostly to be to do with latency, not so much the size of artifacts.
            Once you introduce a bit of latency, things slow down exponentially.
            Not sure if that helps with sleuthing things down...

            tfennelly Tom FENNELLY added a comment -

            I'd be more inclined to leave this particular one until after we eliminate as many other requests as we can. All others would be lower hanging fruit.

            tfennelly Tom FENNELLY added a comment -

            Michael Neale ^^

            tfennelly Tom FENNELLY added a comment -

            I'd be very interested to see how things perform if we put an HTTP/2-enabled proxy on dogfood.

            tfennelly Tom FENNELLY made changes -
            Attachment Screenshot 2017-01-11 12.19.04.png [ 35431 ]
            tfennelly Tom FENNELLY added a comment - - edited

            Yoann Dubreuil is, at some point, going to enable http/2 on dogfood.

            tfennelly Tom FENNELLY made changes -
            Link This issue is blocked by JENKINS-40992 [ JENKINS-40992 ]
            tfennelly Tom FENNELLY made changes -
            Priority Major [ 3 ] Minor [ 4 ]
            tfennelly Tom FENNELLY made changes -
            Rank Ranked lower
            tfennelly Tom FENNELLY made changes -
            Sprint tethys [ 161 ] post-release [ 181 ]
            tfennelly Tom FENNELLY added a comment -

            Moved to post-release, seeing as JENKINS-40992 has been moved there.

            ydubreuil Yoann Dubreuil added a comment -

            I reworked the HTTPS support for the sandbox, and I took the opportunity to add support for HTTP/2 on it. This and TCP Fast Open lead to much lower latency.

            I applied the work to dogfood too.

            jamesdumay James Dumay made changes -
            Sprint post-release [ 181 ] pannonian [ 211 ]
            michaelneale Michael Neale added a comment - - edited

            From the "arse end of the world" a cold load is now about 12s, which seems a reasonable reduction. Collapsing those js files would get it to probably 10s, not huge, but something. So yeah, everything helps a bit. I think http2 is out of reach of a lot of people still, but it is good to know it makes a good difference.

            In terms of metrics: https://gtmetrix.com/reports/ci.blueocean.io/RUoAc9Hh is a bit better. Aiming for 1s by that measure so this is close...

            Other than that, there are likely other opportunities to speed up the "subjective" experience (time to showing something on screen, showing what was there previously, etc.), but that's for another ticket.

            michaelneale Michael Neale made changes -
            Sprint pannonian [ 211 ] tethys [ 161 ]
            michaelneale Michael Neale made changes -
            Rank Ranked lower
            tfennelly Tom FENNELLY added a comment -

            You guys have to be on drugs

            michaelneale Michael Neale added a comment -

            Legal in California.

            But more seriously, Chrome will only load 6 resources at once, so all 3 of these take up one loading "cascade". All we can do is experiment and iterate (now we can measure, thanks to the PR builder).

            ydubreuil Yoann Dubreuil added a comment -

            I can enable caching for all BlueOcean static resources on the reverse proxy side; I think it could be interesting to see the effect. What is killing performance is latency, and avoiding Jetty should be helpful. I'll configure this in the sandbox.

            jamesdumay James Dumay added a comment -

            Could we hold off on any changes that might require users to set up their HTTP proxy in a very specific way? We need to ensure this is as optimized as possible OOTB before looking at these higher-level things.

            ydubreuil Yoann Dubreuil added a comment -

            Configuration done. But it will not really be helpful; most things are already cached by the browser anyway. It could help a bit for a cold cache, maybe.

            tfennelly Tom FENNELLY added a comment -

            as optimized as possible OOTB before looking at these higher level things

            IMO we are at that point after we do the likes of JENKINS-40941, i.e. not what's being proposed/requested here.

            michaelneale Michael Neale added a comment -

            Tom FENNELLY how do we know without trying to combine the react stuff as mentioned here (is it not possible?)

            ydubreuil Yoann Dubreuil added a comment -

            I rolled back my changes. Anyway, given it was only useful for cold caches, it was not worth it.

            michaelneale Michael Neale added a comment -

            Yoann Dubreuil if we could try it on some server, it would be worth it, as cold load is exactly the case we want to optimise for right now.

            tfennelly Tom FENNELLY added a comment - - edited

            how do we know without trying to combine the react stuff as mentioned here

            We have tried it before (see below). I suppose I'm just not too gone on fundamental build changes for what really seems to be an edge case. How many production setups run Jenkins on one side of the world and expect cold loading to be fast on a browser on the other side of the world? That's not a realistic OOTB use case IMO, unless someone can show me otherwise.

            is it not possible?

            It is possible. We had it that way before and moved to splitting it out to see if async loading would help and it did seem to at the time.

            michaelneale Michael Neale added a comment -

            Tom FENNELLY ack, yes I recall. Well I think I might close this. Do you have other things you would like to try that could help trim things in future?

            michaelneale Michael Neale made changes -
            Sprint tethys [ 161 ]
            michaelneale Michael Neale made changes -
            Assignee Tom FENNELLY [ tfennelly ]
            tfennelly Tom FENNELLY added a comment -

            There was some work I was doing on bundling some time ago (that we parked) around making the bundles "more accurate" in terms of what should be in them. I know that sounds vague (and I can explain more if you don't find it too boring; bundling is not all that exciting), but the upshot for bundle sizes is expected to be mixed: some bundles will get slimmer (removal of modules that should not be in there) and some may get a bit heavier (addition of modules that should be in there). All post-1.0, as it's not biting us atm.

            michaelneale Michael Neale added a comment -

            SGTM


              People

              Assignee:
              Unassigned
              Reporter:
              jamesdumay James Dumay
              Votes:
              0
              Watchers:
              4
