It would be useful to be able to include a 'force re-analysis' option in a pipeline job for the case of an analysis failure (for example, due to a misconfiguration).

      We plan to use Anchore mostly as a quality gate with Jenkins.  Currently, if an analysis fails, the image must be manually unsubscribed and then deleted before the Jenkins job can be re-run.

      Ideally, with the 'force' option, the problem could be fixed and the job re-run without any manual intervention.
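For context, a minimal sketch (Python, using the requests library) of what a 'force' option amounts to at the anchore-engine API level: re-submitting the tag with a force flag instead of manually deleting and re-adding the image. The endpoint path, credentials, and the 'force' query parameter are assumptions for illustration and may not match a given engine version.

```python
# Sketch only: re-submitting a failed image for analysis via the anchore-engine
# HTTP API. The URL, credentials, and 'force' parameter below are assumptions,
# not confirmed against a specific anchore-engine release.
import requests

ENGINE_URL = "http://anchore-engine.example.com:8228/v1"  # hypothetical endpoint
AUTH = ("admin", "changeme")                              # hypothetical credentials

def force_reanalyze(tag):
    """Ask the engine to (re-)add an image tag, forcing re-analysis."""
    resp = requests.post(
        f"{ENGINE_URL}/images",
        params={"force": "true"},
        json={"tag": tag},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(force_reanalyze("registry.example.com/myapp:1.0.0"))
```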

          [JENKINS-47541] Support --force option for Anchore analysis

          Daniel Nurmi added a comment -

          Tim,

          We have been discussing this, and another approach we are considering would be to alter the anchore-engine service itself so that, if an image is in the 'analysis_failed' state and a new 'add' request comes in, it pops back to 'not_analyzed' and the system tries again (which is exactly what '--force' does today).  That way, you wouldn't have to set an option in the Jenkins plugin at all, and we think it might be an even more convenient way to handle this scenario - what do you think about this solution?

          Best

          -Dan
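To illustrate the proposal above, here is a rough sketch of the intended state handling on the engine side; the class and function names are hypothetical, not anchore-engine internals.

```python
# Illustrative sketch of the proposed behavior: a repeated 'add' request for an
# image whose analysis previously failed resets it so the analyzer retries.
# ImageRecord and add_image are hypothetical names, not anchore-engine code.

NOT_ANALYZED = "not_analyzed"
ANALYSIS_FAILED = "analysis_failed"

class ImageRecord:
    def __init__(self, digest, status=NOT_ANALYZED):
        self.digest = digest
        self.status = status

def add_image(db, digest):
    record = db.get(digest)
    if record is None:
        # New digest: create a record and let the analysis workflow pick it up.
        record = ImageRecord(digest)
        db[digest] = record
    elif record.status == ANALYSIS_FAILED:
        # Proposed change: re-adding a failed image flips it back to the initial
        # state instead of requiring an explicit --force from the client.
        record.status = NOT_ANALYZED
    # Images that are already analyzed (or analyzing) are left untouched, so a
    # repeated add remains a no-op for them.
    return record
```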


          Tim Webster added a comment -

          Hi,

          That sounds like it would be ideal, although I don't know what the rationale was for the current behaviour in the first place.  Was there a reason for not doing it this way originally, and would this change break anything?

          Also, how does this work for images tagged with 'latest'?  We don't currently enforce a non-latest tag policy (although we probably will in some circumstances).  If we run a job that overwrites an image tagged with 'latest', will the analysis run again, or will it just use the previous analysis result?

          The reason we don't enforce this is that a lot of our images are also 'tool' images (e.g. base images for running CI builds, or things like Jenkins slave images), so it's not so important that they are tagged with a version.  Personally, the jury is still out for me on whether this is good practice - I think a non-latest tag is required for software releases, but for these other types of images it's much more convenient to just use 'latest'.


          Daniel Nurmi added a comment -

          Hi Tim,

           

          The analysis_failed state is a relatively recent addition to the set of image analysis states.  It replaced the original behavior, where a failed analysis was simply retried over and over (the image was returned to not_analyzed, the initial state, on any failure).  We changed that early on because it could create a situation where a lot of work kept happening even though the image couldn't be analyzed (for any of a number of reasons); instead, analysis_failed is now its own state that can be handled explicitly.

          The suggestion I've made is in line with this model: the system wouldn't automatically retry analysis on failure, but it would retry if an image is in the analysis_failed state and another request to analyze it is made explicitly by Jenkins.  We believe this is the right approach in general, and as an added benefit it would eliminate the need for a 'force' option in the Jenkins plugin.  It would also maintain the current behavior where, if an image is already analyzed and Jenkins asks for it to be analyzed again, the request amounts to a no-op on the service (whereas a force would push it through heavyweight analysis again even though it would yield the same result).  Sounds like this is a good solution - we'll move ahead with adding that behavior to anchore-engine!

          To your second question - when you add a tag to the anchore-engine service for analysis, the service looks up the most recent image digest associated with that tag by querying the registry.  If the digest is unchanged, the system keeps the image in whatever state it is currently in (analyzed, analyzing, not_analyzed), and the new behavior would only flip the state from analysis_failed to not_analyzed to reset the analysis workflow.  If the image digest associated with the tag is not in the anchore-engine database at the time of the add request, then a new image record is created for the new digest and the image analysis workflow begins.  In other words, at any point in time, when you add an image to anchore-engine using a tag identifier, the system fetches the most recent image digest associated with that tag and adds it to the system unless it is already present.

          Note that this is the behavior for adding an image - when interacting with the anchore-engine service directly (via the API or the anchore-engine CLI, outside of Jenkins) you can always 'get' any image that is known to anchore-engine by digest or imageId (to retrieve reports for the specific image content identified by that digest or imageId), and if you 'get' an image from anchore-engine using a tag, you will get back the most recent image digest/imageId data associated with that tag (that anchore-engine has stored).

          Best Regards,

          -Dan
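The tag handling described in the comment above can be sketched roughly as follows; the registry, db, and tag_index objects and their methods are hypothetical placeholders rather than anchore-engine internals.

```python
# Sketch of the add-by-tag and get-by-tag flows described above. An 'add' first
# resolves the tag to its current digest by querying the registry; an unseen
# digest gets a new record (and analysis starts), while a known digest keeps its
# state (subject to the analysis_failed reset shown in the earlier sketch).
# A 'get' never triggers analysis.

def add_by_tag(db, registry, tag):
    digest = registry.resolve_digest(tag)  # most recent digest for this tag, per the registry
    if digest not in db:
        # New digest for this tag: create a record and start the analysis workflow.
        db[digest] = {"tag": tag, "status": "not_analyzed"}
    return digest

def get_by_tag(db, tag_index, tag):
    # Return the stored data for the most recent digest the engine has
    # associated with this tag; no registry lookup, no new analysis.
    return db.get(tag_index[tag])

def get_by_digest(db, digest):
    # Specific image content is always addressable directly by digest (or imageId).
    return db.get(digest)
```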

           


          Swathi Gangisetty added a comment -

          Addressed by commits ce3f61a and 3ab7360


            Assignee: Swathi Gangisetty (swathigangisetty)
            Reporter: Tim Webster (timwebster9)