[JENKINS-22216] Custom Metrics with aggregation for load tests

Type: Improvement
Resolution: Unresolved
Priority: Major
Labels: None
I'd like to have some metrics included with the performance test, and the same comparison view that the Performance plugin provides:
https://www.dropbox.com/s/f2aseo2cj9vmr37/Screenshot%202014-03-13%2022.47.15.png
I have a job:
1. Shell
````
rm -f debug_atop.raw
cat <<EEND > monitor.sh
#!/bin/bash -x
# record an atop sample every 5 seconds in the background
atop -w debug_atop.raw 5 &
# generate CPU/memory load for 180s if stress is installed, otherwise just wait
which stress && stress -v --cpu 5 --vm 5 --timeout 180 || sleep 180
# stop the background atop recorder
kill %1
EEND
chmod +x monitor.sh
./monitor.sh
# dump the recorded CPU and memory samples in parseable form
atop -PCPU,MEM -r debug_atop.raw > atop.txt
# creates cpu.jtl and mem.jtl (a sketch of this converter follows below)
curl -s https://dl.dropboxusercontent.com/u/15604911/atop_sampler.py | python
````
2. Publish Performance test result report, with report files:
   cpu.jtl
   mem.jtl
You'll get the same result as shown at the beginning: https://www.dropbox.com/s/f2aseo2cj9vmr37/Screenshot%202014-03-13%2022.47.15.png
This job is a prototype.
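The atop_sampler.py link above doesn't show what the converter does, so here is a minimal sketch of what such a script might look like. It assumes atop's parseable output layout as described in atop(1) (label, host, epoch, date, time, interval, then label-specific fields, with SEP/RESET marker lines between samples) and a minimal XML JTL format that the Performance plugin accepts; the field positions and attribute names are assumptions, not the actual script.
````
#!/usr/bin/env python
# Hypothetical stand-in for atop_sampler.py (not the actual script):
# converts `atop -PCPU,MEM` output from atop.txt into cpu.jtl and mem.jtl.

def parse_atop(path):
    cpu, mem = [], []
    for line in open(path):
        fields = line.split()
        if not fields or fields[0] in ('RESET', 'SEP'):
            continue  # skip restart markers and sample separators
        if fields[0] == 'CPU':
            epoch = int(fields[2])
            # assumed layout: hertz, #cpus, then sys/user/nice/idle/wait ticks
            sys_t, usr_t, nice_t, idle_t, wait_t = map(int, fields[8:13])
            total = sys_t + usr_t + nice_t + idle_t + wait_t
            busy_pct = 100 * (sys_t + usr_t + nice_t) // total if total else 0
            cpu.append((epoch, busy_pct))
        elif fields[0] == 'MEM':
            epoch = int(fields[2])
            # assumed layout: page size, physical pages, free pages, ...
            pagesize, phys, free = map(int, fields[6:9])
            mem.append((epoch, (phys - free) * pagesize // (1024 * 1024)))
    return cpu, mem

def write_jtl(path, label, samples):
    # The metric value is stored in the elapsed-time attribute `t`; this is
    # exactly why the plugin then treats 'cpu' and 'memory' as milliseconds.
    with open(path, 'w') as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        out.write('<testResults version="1.2">\n')
        for epoch, value in samples:
            out.write('<sample t="%d" ts="%d" lb="%s" s="true" rc="200" rm="OK"/>\n'
                      % (value, epoch * 1000, label))
        out.write('</testResults>\n')

cpu, mem = parse_atop('atop.txt')
write_jtl('cpu.jtl', 'cpu', cpu)
write_jtl('mem.jtl', 'mem', mem)
````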
But what is not accurate now is that 'cpu' and 'memory' are reported in milliseconds, and the plugin's threshold rules are applied to them.
It would be nice to be able to build custom metric reports, with aggregated medians and averages, to see how the load-test results correlate with the system metrics.
We run fast load tests on EC2 Jenkins slaves with the same config; having this correlation on the same page is important for us, and even using CloudWatch for it is not convenient.
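To illustrate the kind of aggregation asked for here, a minimal sketch over the hypothetical (epoch, value) samples from the converter above might look like this:
````
# Hypothetical aggregation over the (epoch, value) samples parsed above;
# a custom metric report would show these figures per metric.
def aggregate(samples):
    values = sorted(v for _, v in samples)
    n = len(values)
    if n == 0:
        return 0, 0
    median = (values[n // 2] if n % 2
              else (values[n // 2 - 1] + values[n // 2]) / 2.0)
    average = float(sum(values)) / n
    return median, average

cpu_median, cpu_average = aggregate(cpu)
mem_median, mem_average = aggregate(mem)
````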