Type: New Feature
Resolution: Unresolved
Priority: Major
Labels: None
As an example, consider the following:
Build 1 - Stable (within CPPCheck threshold)
Build 2 - Unstable (exceeds CPPCheck threshold)
Build 3 - Unstable (no change)
Build 4 - Unstable (no change)
If you view the CPPCheck results for build 4, it shows the issues resolved since the dawn of time, which is arguably incorrect (see the attached image for an example of solved errors showing when the delta is 0). More crucially, it generates its delta for new errors against the previous build, regardless of that build's status.
This creates the problem that, once build 2 drops off the history, you lose the deltas that help you identify the newly introduced errors, because the deltas for builds 3 and 4 don't exist (they are zero).
There are two solutions I can think of:
1. Allow the previous build status for delta generation to be specified in the configuration: lastStableBuild, lastSuccessfulBuild
2. Hard-code it to use the lastStableBuild only.
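A minimal sketch of how option 1 changes the comparison, assuming each build stores its cppcheck issues one signature per line in an errors.txt (all directory names and signatures here are illustrative, not from the plugin):

```shell
# Assumed layout (illustrative): one cppcheck issue signature per line.
mkdir -p build1 build4
printf 'memleak:a.c:10\n' > build1/errors.txt                       # last stable build
printf 'memleak:a.c:10\nnullPointer:b.c:5\n' > build4/errors.txt    # current build
sort build1/errors.txt > baseline.txt
sort build4/errors.txt > current.txt
# New errors = issues in the current build that are absent from the
# last *stable* baseline, regardless of intermediate unstable builds.
comm -13 baseline.txt current.txt
```

With the baseline pinned to the last stable build, the new error stays visible in every subsequent build until it is fixed, even after the intermediate builds drop off the history.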
[JENKINS-24076] Incorrect builds used to generate deltas (No deltas for builds fixed after failure where errors were introduced)
I understand WHAT you want, but I don't understand WHY you want it. Your usage pattern is a very special one.
As I understand it, you have a set of issues reported by Cppcheck, let's say 100 of them, which you want to just ignore and never fix. So you set a threshold to make the build unstable if there are more than 100 issues. And you want to update the plugin to see all issues in the latest build that don't belong to the 100 known ones. You are also dropping the history from time to time, so you are not sure which issues were the 100 original ones, but you know there were 100 of them. Correct?
My suggestion was to analyze those 100 issues and use the --inline-suppr, --suppress or --suppressions-list cppcheck arguments to hide them. Cppcheck would then report no issues instead of 100, and any issue present in the latest build would belong to the "to fix" category. Does it make sense?
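For reference, a sketch of what the suppressions-list approach might look like (the flags are real cppcheck options; the file paths and error IDs are invented for illustration):

```shell
# Suppressions file: one entry per line, <error-id>[:<filename>[:<line>]]
cat > cppcheck-suppressions.txt <<'EOF'
memleak:src/legacy/io.c
nullPointer:src/legacy/parse.c:120
EOF
# Run cppcheck with the known backlog hidden (guarded so the sketch
# still runs on machines without cppcheck installed):
command -v cppcheck >/dev/null 2>&1 \
  && cppcheck --suppressions-list=cppcheck-suppressions.txt src/ \
  || true
```

With the backlog suppressed, any issue cppcheck still reports is by definition newly introduced, so the threshold can simply be zero.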
Sure, feel free to create a feature branch for this task and send a pull request.
Questions:
- What should happen in your proposal if unit tests (or whatever else) make the build unstable instead of Cppcheck?
- What should be done about the missing history containing the last stable build? Do you plan to change the data model to store the deltas too?
- If somebody fixes one of the 100 original issues by happy accident, the threshold should be updated to 99. A newly introduced issue may be a serious memory leak or whatever, but the build will still be reported as stable, and the "delta with last stable build" approach will seriously fail.
That's just my two cents.
Well, now the WHAT is out of the way, if you care to hear the WHY, it might help you understand my plight.
In an ideal project scenario, you would set up CPPCheck from the outset, ensuring you have very few errors. However, I joined a large system project and introduced CPPCheck very late in the day. Some lines of code are more than 30 years old! With that in mind, you might appreciate that there are lots of warnings and errors. Unfortunately, there are thousands of errors as opposed to hundreds. Whilst we could ignore them, we actually want to fix them all, in time. I have filtered out about 10000 errors, leaving around 2500 errors I think should be fixed. However, time is not on our side, so what I have implemented is a Jenkins post-build step that uses the Jenkins CLI to reduce the threshold after each build, whenever someone fixes a warning or error. This ensures we can't introduce any new errors and can continue to fix the existing ones.
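The threshold "ratchet" described above might be sketched roughly as follows. The threshold field name, the job config fragment and the error-counting step are assumptions for illustration; get-job and update-job are standard Jenkins CLI commands.

```shell
NEW_COUNT=2480   # in the real step: the error count taken from the cppcheck report

# In the real step this fragment would come from:
#   java -jar jenkins-cli.jar -s "$JENKINS_URL" get-job "$JOB" > config.xml
printf '<unstableTotalAll>2500</unstableTotalAll>\n' > config.xml

# Rewrite the unstable threshold to the current error count
# (the real step would only ever lower it, never raise it).
sed -i "s|<unstableTotalAll>[0-9]*</unstableTotalAll>|<unstableTotalAll>$NEW_COUNT</unstableTotalAll>|" config.xml
cat config.xml

# The real step would then push the updated config back:
#   java -jar jenkins-cli.jar -s "$JENKINS_URL" update-job "$JOB" < config.xml
```

The effect is that every fixed warning permanently tightens the bar, so the error count can only go down over time.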
So as you might imagine, when there are thousands of errors, it's extremely difficult to see the wood for the trees, or in our case, the new errors amongst the unchanged errors. We are getting there, but very slowly, which is why it's such a pain when a new error is introduced but not spotted before the next build is churned out by CI. With collaboration occurring round the clock and around the globe, it's difficult to co-ordinate commits with builds, so builds very quickly push the deltas out of the history before we've had a chance to look at them, especially over a weekend.
We're looking to increase the number of builds we keep, but we build deployment images as part of the build, which requires lots of space, so keeping more than ten becomes a bit tricky.
As I say, I'm happy to provide a generic solution, it'll just take me some time to fit around my usual workload.
I am looking at this issue and trying to build the plugin, but I am seeing the following Maven errors. Can you help?
[WARNING] The POM for org.apache.maven:maven-plugin-api:jar:3.0.3 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
...
[INFO] --- maven-hpi-plugin:1.74:test-hpl (default-test-hpl) @ cppcheck ---
[INFO] Generating /var/lib/jenkins/jobs/jenkinsci-cppcheck-plugin/workspace/target/test-classes/the.hpl
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.706s
[INFO] Finished at: Thu Sep 18 10:06:49 BST 2014
[INFO] Final Memory: 24M/347M
[INFO] ------------------------------------------------------------------------
Waiting for Jenkins to finish collecting data
[ERROR] Failed to execute goal org.jenkins-ci.tools:maven-hpi-plugin:1.74:test-hpl (default-test-hpl) on project cppcheck: Error preparing the manifest: Failed to open artifact org.apache.maven:maven-plugin-api:jar:3.0.3:test at /var/lib/jenkins/.m2/repository/org/apache/maven/maven-plugin-api/3.0.3/maven-plugin-api-3.0.3.jar: 1 problem was encountered while building the effective model
[ERROR] [FATAL] Non-parseable POM /var/lib/jenkins/.m2/repository/org/apache/maven/maven-plugin-api/3.0.3/maven-plugin-api-3.0.3.pom: only whitespace content allowed before start tag and not 0 (position: START_DOCUMENT seen 0... @1:1) @ line 1, column 1
[ERROR] for project for project
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
It's okay, I figured it out. I had previously run maven with the wrong settings file and it mis-configured its local repo. I removed the repo (/var/lib/jenkins/.m2/repository) and repeated the build and it worked fine.
I have noticed another "bug" recently. If you set the maximum new error threshold to 1 and you have a build where one new error is added but, at the same time, one is resolved, the delta is zero and the build is flagged as stable. This doesn't seem right.
Regarding the bug in my previous comment: it is actually a side effect of the fact that the plugin currently compares against the last build instead of the last stable build. If you get a stable build followed by a build that resolves one error and introduces one error, that build is marked as unstable. However, if there is no change in the following build, its status becomes stable again, because the number of new errors is zero, even though one new error has been introduced since the last time the build was stable. This really does indicate that comparing new errors between consecutive builds is not right, and my original statement still stands: builds should always compare the current results against the last stable build. The last stable build is, after all, the benchmark for where the cppcheck results need to be. If new errors have been introduced since then, the build can't possibly be considered stable - ever.
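The churn case can be shown with a toy example (signatures are made up): a count-based delta cancels out, while a set comparison still exposes the new error.

```shell
printf 'memleak:a.c:10\n'    | sort > prev_errors.txt   # previous build's issues
printf 'nullPointer:b.c:5\n' | sort > curr_errors.txt   # current build's issues
# Count-based delta: one error fixed, one introduced, so it reads zero.
echo "count delta: $(( $(wc -l < curr_errors.txt) - $(wc -l < prev_errors.txt) ))"
# Set-based comparison: the newly introduced error is still visible.
comm -13 prev_errors.txt curr_errors.txt
```

This is why comparing issue sets against the last stable baseline, rather than subtracting counts between adjacent builds, catches the one-in-one-out case.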
This fix maintains backwards compatibility with the current strategy (which also remains the default). But it adds the option to change the behaviour such that it compares the current report against the last "stable" report.
https://github.com/iwonbigbro/cppcheck-plugin/commit/d639413bf2b06c47ffa14cf3e2fe8c70073faa76
Please see the comments in your pull request; they should be resolved before the merge.
Is there any update on this issue? I'd really love to see this feature added!
Hi Michal
It's been a difficult one to explain.
I understand that the data is dynamic. I have had to use this approach to force delta generation between the last stable build and the current build, by deleting all the intermediate builds. The issue I am having is that builds are churned out every ten minutes, which is faster than developers are able to fix the CPPCheck warnings they have introduced. So what happens is, you get an intermediate build that sits between the last stable build and the current build. The current build shows no difference, so you can't use it to determine what new errors were introduced; you have to go back to the intermediate build to identify them.
But at the same time, the most recent build shows what errors have been resolved since the last stable build, so they accumulate as errors are resolved across unstable builds. This is also odd, because what is interesting to me and my development team is: what warnings did we fix since the last unstable build? And how many errors are left from the warnings we introduced since the last stable build? Does this make sense?
So if build 1 is stable, I introduce 5 CPPCheck warnings in build 2, someone else introduces 3 warnings in build 3, and another build comes out with no additional warnings (build 4), the delta is zero. If I delete build 3, the delta is 3, because it shows the warnings introduced between builds 2 and 4. If I delete build 2, then I see what I would expect to have seen: the total number of errors since the last stable build (build 1). That said, without deleting any of the builds, if I fix one warning in build 5, it will continue to show 1 warning as solved for all builds until the next stable build. I would expect it to show specifically the delta between the build that had the error and the build that resolved it, but nothing outside the scope of this.
I don't necessarily think there is anything wrong with the cppcheck report, it all comes down to how the plugin represents the results between builds. The simple solution for me, would be to swap the comparisons such that resolved errors are shown since the last build and new errors are since the last "stable" build, which is the inverse of what I am currently seeing. The more complicated solution would be to have this as a configurable set of options, allowing the job to define what builds are used to represent the deltas.
I hope this clarifies my issue and what I am asking for.
If not, I will need to fork the plugin and make the changes and submit a pull request so I can demonstrate it to you.