Bug
Resolution: Unresolved
Major
Blue Ocean 1.2-beta2, Blue Ocean - Candidates
There have been occasional reports of high memory usage when keeping Blue Ocean open in a browser tab for many hours.
It isn't clear yet which page is responsible, but it is worth investigating:
- Dashboard main page
- Activity page for a given build
Ideally test in cases where there are running builds. What should be noted is whether the tab's memory usage increases monotonically. If it does, a likely candidate is SSE events loading data into a store (something like MobX) which should probably behave more like a bounded buffer...
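The suspected pattern can be sketched as follows. This is a hypothetical illustration, not the actual Blue Ocean code: a handler that appends every server-sent event to a store grows without bound, whereas capping the store turns it into a fixed-size buffer.

```javascript
// Hypothetical sketch: an SSE handler that appends every event to a store
// grows without bound; adding a cap makes it a fixed-size buffer instead.
class EventStore {
  constructor(maxEntries = 500) {
    this.maxEntries = maxEntries;
    this.events = [];
  }
  // Called for every server-sent event; evicts the oldest entry once full.
  push(event) {
    this.events.push(event);
    if (this.events.length > this.maxEntries) {
      this.events.shift(); // drop the oldest entry instead of growing forever
    }
  }
}

const store = new EventStore(3);
for (let i = 0; i < 10; i++) store.push({ id: i });
console.log(store.events.length); // stays at the cap: 3
console.log(store.events[0].id);  // oldest retained event: 7
```

With the cap in place, a tab left open overnight holds a bounded number of events regardless of how many the server pushes.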
[JENKINS-45094] Memory usage on dashboard, activity pages after browser is open a long time
My experience has been with Safari on macOS Sierra.
I switched my Jenkins site to Blue Ocean and set myself a reminder in IRC (tang^), and noticed after 2.5 hours the tab had 2.5 GB of RAM in use. By the end of my day (7 hours of running Safari), the tab was at 4.5 GB.
My day was one of watching the dashboard, looking at activity pages, bouncing back to classic view for administration.
Chrome Dev Tools -> Memory: take heap snapshots before and after (I would imagine a few hours of elapsed time would be plenty). We want to look for anything with unusually high object counts, of course.
It could be the data cache, although those objects are usually small compared to anything you find in the DOM. My guess is that DOM elements are leaking when navigating between views.
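A common way views leak when navigating, sketched below with a hypothetical event bus standing in for whatever shared emitter the app uses: the view subscribes on mount but never unsubscribes on unmount, so the bus keeps a reference to the view (and, in a browser, to its whole DOM subtree) forever.

```javascript
// Hypothetical sketch of the suspected leak: subscriptions that outlive the view.
const bus = {
  listeners: new Set(),
  on(fn) { this.listeners.add(fn); },
  off(fn) { this.listeners.delete(fn); },
};

function mountView(name) {
  const view = { name, bigData: new Array(1000).fill(0) };
  const handler = () => view; // the closure pins `view` in memory
  bus.on(handler);
  return { view, unmount: () => bus.off(handler) };
}

// Leaky navigation: mount ten views, never unmount them.
for (let i = 0; i < 10; i++) mountView('view' + i);
console.log(bus.listeners.size); // 10 - every "navigated away" view is still reachable

// Correct navigation: unmounting removes the listener, so the view can be collected.
const v = mountView('clean');
v.unmount();
console.log(bus.listeners.size); // back to 10; `clean` is now collectable
```

In a heap snapshot this shows up as detached DOM trees retained through listener closures, which matches the "orphaned nodes" signature described later in this thread.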
krachynski did you happen to open the run details much during your use of the app?
Yes I did. I was trying to debug build errors and had to read the logs.
krachynski did you leave it open while the build was running, or just open it once at the end?
Both. Sometimes I'd watch for the error message; sometimes I'd get pulled into a distraction of some sort and look after a job was complete.
thanks krachynski. If you get a chance to try in Chrome one day, like cliffmeyers mentioned above, that would be helpful.
cliffmeyers looks like it doesn't really matter which page is kept open.
I've left my dashboard open in both Safari and Chrome for several hours now. Chrome shows 33 MB usage before and after; Safari is using 2.86 GB.
I did just discover the Safari Debug menu (enable with defaults write com.apple.Safari IncludeInternalDebugMenu 1) and have started a Memory Sampler. Hopefully this turns up something.
thanks krachynski - that would be most helpful. Interesting about Chrome (I tried for an hour or so yesterday with no luck). I am not familiar with the Safari tools myself, but since you have found them and can see the problem, before/after comparisons could be very helpful in finding out what it is.
Well, I'm not familiar with these tools either, but I'll share what I can.
The interesting tidbit I have right now is that after an hour following a restart of Safari to enable the internal debug menu, Safari is using 928MB of RAM and, of that, 530MB is in a GC heap. GC is labeled as not scheduled.
Oh, I got the following dialog on shutting down all my Safari windows. I don't know if these are all related to the Jenkins tab or not. I will try tomorrow to just open a single window watching Jenkins Blue Ocean and see if this dialog pops up again.
ok. I know jamesdumay is a keen Safari user - I wonder if he has seen this?
Okay, with just my Jenkins page open and a couple mis-clicks into this site to look up Jenkins issues, I have a more concise world leaks dialog. 5 JavaScript global objects and 1 WKFrameRef. I don't know if JIRA was responsible for any of that, but there you go.
Interesting. Haven't run into this on Safari but then again I don't tend to leave tabs open for long.
Something we've got to take a look at. I know our stores are not LRUs, so it's not surprising that memory use increases over time.
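For reference, an LRU-capped store can be sketched in a few lines using a `Map` (this is an illustrative sketch, not the app's actual store implementation). `Map` iterates in insertion order, so re-inserting an entry on access keeps recently used entries at the back and lets the oldest be evicted from the front.

```javascript
// Minimal LRU-capped store sketch: bounded memory regardless of how many
// keys are ever seen. Not the actual Blue Ocean store implementation.
class LruStore {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);     // move to most-recently-used position
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const runs = new LruStore(2);
runs.set('run1', 'a');
runs.set('run2', 'b');
runs.get('run1');          // touch run1 so run2 becomes least recently used
runs.set('run3', 'c');     // evicts run2
console.log([...runs.map.keys()]); // ['run1', 'run3']
```

Applying a cap like this to per-run data would keep a long-lived tab's footprint proportional to the capacity, not to the total number of runs viewed.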
Okay, today's effort was minimal. I switched to Blue Ocean and left that tab in the background for 2.5 hours. Usage ended up at 3.4 GB without my touching anything on that tab.
krachynski I am not sure how to interpret any of that from Safari - the internet seems fairly opaque on how to examine memory usage in Safari vs Chrome. So Chrome doesn't do this to you? Can we narrow it down to Safari?
If so, it looks like there can be snapshots taken: https://stackoverflow.com/questions/42546652/using-safari-web-inspector-to-debug-memory-leak
Given this is reproducible for you in Safari:
- Do before/after snapshots (somehow, if you can work that out with Safari)
- Try on a different Jenkins instance (perhaps a throwaway one) - it would be interesting to know if it is data related (it likely is, given the volume)
michaelneale, to be honest, I wasn't too sure either. I have some time right now to poke around at this and can fire up a docker instance as well. Thanks for that link; I now see there are lots of tools available in Safari to look at.
Chrome didn't expose this behaviour when I tried it last, but I don't like using Chrome so I'd rather work on fixing this.
I suspect you have to do it with Xcode instruments. Downloading now to try that out.
We have Firefox users reporting high memory usage as well. They also tested and reproduced high memory usage on Chrome. Please see bug 791695 for a quick summary. The memory reports show a high amount of "orphaned nodes", which are parts of the site that are not displayed but are still referenced (probably via JavaScript).
In Firefox you can use about:memory to investigate the memory usage, as well as the heap snapshot feature in our devtools.
thanks erahm - that seems useful; now we can see what's going on.
cliffmeyers mind if I ping this one at you? We seem to be triangulating on the problem - as you said, it seems orphaned nodes, not data in the MobX stores, are causing this.
Hi,
I have provided the info to the bug in Eric's comment.
Feel free to ping me for additional info/logs/memory dumps/etc.
Thanks a lot.
I reported a similar issue with Firefox via https://bugzilla.mozilla.org/show_bug.cgi?id=751557 years ago. The problem might still happen for our Jenkins instance, given that we are behind on updating it and might shut it down soon. But at the time I tried to investigate, we got little help, so I stopped the investigation.
We never used the aforementioned plugin, but I had the feeling that things got worse when lots of jobs were added via the Jenkins API. This might be unrelated to what you see here, but I just wanted to share this information.
Every day I have to restart Firefox because it takes 16 GB of RAM. I view job pipeline stages in about 10 tabs. Memory usage is the same in Chrome.
I can give you all needed info and data. Also you can get it from the ticket https://bugzilla.mozilla.org/show_bug.cgi?id=791695#c33
whimboo slavik334 can you confirm whether this is with Blue Ocean or with the 'classic' stage view (i.e. the boxes)?
At least in my case it's the classic view. We never used Blue Ocean. Also, our CI system is still on Jenkins 1.580.3, so it may not even be available. Please note that I cannot help with running a recent version of Jenkins, given that it would require lots of work, and as I said above we are shutting down the service soon. But all the code is available here: https://github.com/mozilla/mozmill-ci. If all this should go into a different issue, please let me know and I can file one.
whimboo yes, probably not this ticket in that case - ideally you can open it against the core component.
ok, so it seems all the new information concerns the classic stage view...
Just adding a data point in case it helps:
On Linux/Firefox, I can see a running build page grow from ~40 MB to over 2 GB over the course of an hour or so. The majority of that space, ~1.6 GB, is listed under dom/orphan-nodes.
Jenkins 2.190.1
Firefox 60.9.0esr
pwhoriskey, if you could install a recent Nightly build of Firefox 72.0 and the profiler add-on from https://profiler.firefox.com/, you could create a profile that might help figure out where the leak is actually happening. See the documentation on how to create and upload a profile: https://profiler.firefox.com/docs/#/
cliffmeyers kzantow - do you have tips for how to sniff out the things we suspect could leak memory? I am thinking of the stores that load data based on events - they probably just keep appending, right?
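One hypothetical way to sniff out an append-only store (a sketch under the assumption that stores expose a size or length, not an existing Blue Ocean utility): periodically sample each store's entry count and flag any store whose count grows on every sample.

```javascript
// Sketch: instrument stores and flag the ones that only ever grow.
function makeGrowthTracker(stores) {
  const history = new Map(); // store name -> array of sampled sizes
  return {
    sample() {
      for (const [name, store] of Object.entries(stores)) {
        if (!history.has(name)) history.set(name, []);
        // Maps expose .size, arrays expose .length
        history.get(name).push(store.size ?? store.length);
      }
    },
    suspects() {
      // a store is suspect if every sample is strictly larger than the last
      return [...history.entries()]
        .filter(([, sizes]) => sizes.length > 2 &&
          sizes.every((s, i) => i === 0 || s > sizes[i - 1]))
        .map(([name]) => name);
    },
  };
}

// Simulate: one store grows on every "SSE event", one stays flat.
const stores = { activity: [], pipelines: new Map() };
const tracker = makeGrowthTracker(stores);
for (let i = 0; i < 5; i++) {
  stores.activity.push({ event: i });
  tracker.sample();
}
console.log(tracker.suspects()); // ['activity']
```

Run `sample()` on a timer in a long-lived tab, and any store named by `suspects()` is a candidate for the kind of cap or eviction discussed above.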