Type: Improvement
Resolution: Fixed
Priority: Major
Labels: None
The Performance plugin performs very poorly on larger data sets; it effectively becomes unusable with anything beyond very small data sets.
In this issue, I'm presenting my findings, along with a description of the fixes I've applied. I'm putting together a GitHub pull request so the code changes can be merged into the official repository.
After analysis, I've identified the following problem areas:
- Disk IO is off the charts
- CPU load is off the charts (well, OK, under 100%, but way too high)
- Memory consumption is off the charts
For illustration purposes: one of our jobs generates 16650 HTTP samples per iteration, and the build history currently has a little under 200 builds. Generating graphs requires more than a gigabyte of memory in the Jenkins JVM, and each graph takes so long that typically either an OutOfMemoryError occurred or users gave up and aborted the page load. What is particularly problematic is that not only do our graphs fail to display, but the build server itself becomes unworkable for quite some time.
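To put those numbers in perspective, here is a rough back-of-envelope sketch of the heap pressure. The ~300 bytes per parsed sample is an assumed figure for illustration only (the real cost depends on the plugin's sample object layout), but even at that conservative estimate, simply holding every parsed result in memory approaches a gigabyte:

```java
// Rough estimate of the heap needed to hold every parsed sample in memory.
// The per-sample cost of ~300 bytes is an ASSUMPTION for illustration;
// the real figure depends on the plugin's sample object layout.
public class HeapEstimate {
    public static void main(String[] args) {
        long samplesPerBuild = 16650; // HTTP samples per iteration (from the job above)
        long builds = 200;            // a little under 200 builds in the history
        long bytesPerSample = 300;    // assumed per-object plus collection overhead

        long totalSamples = samplesPerBuild * builds;
        long totalBytes = totalSamples * bytesPerSample;

        System.out.println(totalSamples + " samples held in memory");
        System.out.println("~" + (totalBytes / (1024 * 1024))
                + " MB of heap, before any graph rendering overhead");
    }
}
```

That is roughly 3.3 million samples and on the order of 950 MB just for the parsed data, before the charting code allocates anything, which is consistent with the JVM pressure observed above.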
is duplicated by:
- JENKINS-20848 Performance Plugin 1.9 throws OutOfMemoryError with large amounts of data (Closed)
- JENKINS-22385 Scalability issue storing individual results in memory (Closed)
- JENKINS-9522 [Performance] - Display graph (Closed)
- JENKINS-9690 Cache jmeter parsing results (Closed)