• Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: docker-workflow-plugin
    • Environment: Jenkins 2.19.3, pipeline-model-definition-plugin 0.6

      When using a Dockerfile to set up the build environment (as added in JENKINS-39216), a docker image is built and tagged with the SHA1 hash of the Dockerfile. Nothing seems to remove this image once the build completes, so disk space on the docker node will eventually be exhausted.
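
      For reference, a minimal Declarative pipeline of the kind described (the stage name and build step are illustrative, not from the report):

      pipeline {
        agent {
          dockerfile {
            filename 'Dockerfile' // image is built and tagged with the SHA1 hash of this file
          }
        }
        stages {
          stage('Build') {
            steps {
              sh 'make' // illustrative build step run inside the image
            }
          }
        }
      }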

          [JENKINS-40723] Built Dockerfile images are never removed

          Andrew Bayer added a comment -

          Interesting question here - how do we handle this? Do we want to delete the image after every build? michaelneale, jamesdumay, hrmpw - thoughts?


          Patrick Wolf added a comment -

          This isn't specific to pipeline-model-definition though is it? This would be true of any build using a dockerfile, no? abayer


          Andrew Bayer added a comment -

          Yes, but in Declarative, we possibly generate a lot more of 'em without the user ever having to really think about it.


          Patrick Wolf added a comment -

          Building the image for every build kind of defeats the purpose, doesn't it? Then it would be wiser to just build the image once and publish it to Docker Hub to use with a regular docker statement.

          I think this might fall under general workspace management more than something to be solved in Declarative. We could add some sugar to clean the workspace in post, but beyond that what can we do?


          Josh Sleeper added a comment -

          Preemptive apology for the novel of a comment, but I wanted to be detailed with my thoughts.


          hrmpw is absolutely right: building the image every time does mostly defeat the purpose, but being able to store our Docker definition alongside our pipeline definition has been exceedingly convenient for both our dev and QA teams in my experience so far.

          This may or may not be hard to do from your end (I'm not too familiar with the code managing the Dockerfile interactions), but here's the way I imagine the Dockerfile flow in declarative pipeline could work:


          Each declarative pipeline job run using a Dockerfile would retain 1 or more pairs of fingerprints, where each pair would contain a Dockerfile fingerprint and the fingerprint of the Docker image built from said Dockerfile.

          Thus, for each declarative pipeline job run that utilizes Dockerfiles there are two possible paths to follow:

          1. The Dockerfile fingerprint does match the fingerprint from the previous job run, meaning that ideally we shouldn't rebuild unless we have to.
            To determine that, we check the current node for the image fingerprint from the previous job run:
            1. If the current node does have an image that matches the image fingerprint from the previous job run, just run that image and continue with the job.
            2. If the current node doesn't have an image that matches the image fingerprint from the previous job run, then we logically need to build it on the current node even though the Dockerfile itself hasn't changed.
          2. The Dockerfile fingerprint doesn't match the fingerprint from the previous job run, meaning we should rebuild and clean up previously created Docker images if present.
            Just like the first path we check the current node for the image fingerprint from the previous job run, but this time we focus on cleanup:
            1. If the current node does have an image that matches the image fingerprint from the previous job run, remove that image from the current node and then build and run like normal.
            2. If the current node doesn't have an image that matches the image fingerprint from the previous job run, then we've at least done our due diligence to clean up and just build and run like normal.

          Keeping Dockerfile and Docker image fingerprints associated as a pair ensures that you can selectively remove or rebuild per Dockerfile used, and removing images only relative to fingerprints from the last job run handles what I'm guessing is the common case for Dockerfile image management.
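
          A rough sketch of that decision flow, purely as pseudocode (the helpers fingerprintOf, nodeHasImage, buildImage and removeImage are hypothetical placeholders, not existing plugin API):

          // Hypothetical sketch of the proposed fingerprint-pair logic; none of these helpers exist today.
          def ensureImage(dockerfile, previousPair) {
            def dockerfileFingerprint = fingerprintOf(dockerfile)
            if (previousPair != null && dockerfileFingerprint == previousPair.dockerfileFingerprint) {
              // Dockerfile unchanged: reuse the previous image if this node still has it.
              if (nodeHasImage(previousPair.imageFingerprint)) {
                return previousPair.imageFingerprint
              }
              return buildImage(dockerfile) // same Dockerfile, but the image is missing on this node
            }
            // Dockerfile changed: clean up the old image on this node, then rebuild.
            if (previousPair != null && nodeHasImage(previousPair.imageFingerprint)) {
              removeImage(previousPair.imageFingerprint)
            }
            return buildImage(dockerfile)
          }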


          I think this gives us the best overall user-friendliness for Dockerfiles in the declarative pipeline syntax, following the mentality that users generally shouldn't have to think too much about managing or selecting the nodes they're utilizing (or what arbitrary Docker images they create).

          Let me know what you think or if I obviously missed something!


          Gavin Llewellyn added a comment -

          At the moment, the Dockerfile builds are fairly quick after the first build, as Docker's cache is used to avoid rebuilding any Dockerfile steps that haven't changed. This caching would be defeated if the built image were deleted after each build (unless the Docker node has similar images that are not managed by Jenkins).

          Determining whether an image should be rebuilt is not necessarily as simple as checking whether the Dockerfile has changed, or that a previous image built for that same Dockerfile still exists. I think this logic should be left to Docker, and not replicated in Jenkins.

          My suggestion would be the following:

          • When building a Dockerfile, tag the resulting image with the job name/ID instead of the Dockerfile hash.
          • Before building a Dockerfile, check for an existing image using the job's tag. If an image exists, note the image ID.
          • After building a Dockerfile, check the image ID of the current image with the job's tag. If the image ID has changed (i.e. the tag has moved), delete the old image.

          If a job's name is changed, or a job is deleted, there may be some images left orphaned in the Docker node. However, it will be much easier for an admin to clean these up in the future, as the tags would make it obvious what the images were used for.
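
          A hedged sketch of that tag-move check in scripted syntax (the jenkins/<job name> tag scheme is an assumption for illustration, not current plugin behaviour):

          node {
            checkout scm
            def tag = "jenkins/${env.JOB_NAME}".toLowerCase()
            // Remember which image currently owns the tag (empty if none).
            def oldId = sh(script: "docker images -q ${tag}", returnStdout: true).trim()
            sh "docker build -t ${tag} ."
            def newId = sh(script: "docker images -q ${tag}", returnStdout: true).trim()
            if (oldId && oldId != newId) {
              // The tag has moved to a new image, so the previous one is now orphaned.
              sh "docker rmi ${oldId} || true"
            }
          }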


          James Dumay added a comment -

          Playing devil's advocate here, but should we really be in the business of Docker image lifecycle management? There are tools like docker-gc that can take care of all of that for you.


          Josh Sleeper added a comment -

          jamesdumay
          I can totally see what you're saying, but to some extent I kinda think yes.

          To me, part of the beauty of using Dockerfile(s) in the Declarative Pipeline syntax is that it really does allow me to just stop caring about my nodes. I don't care about their platform, I don't care about what they have installed (beyond Docker, of course), and I don't care about managing a complete pre-built Docker image somewhere.

          Not caring too much about the images I generate that way seems like it fits right into that mentality.

          Here's the perspective I think many people might end up seeing this from.


          As someone who is a Jenkins user but not a Jenkins admin, working with a pool of generic nodes with Docker installed, I don't want to be that person who used up all of a slave/node's disk space because:

          1. I didn't have permission to access the nodes directly and clean up my old images
          2. I didn't know how to clean up my old images
          3. I didn't have time or care enough to clean up my old images

          One solution, just like you suggested, is to run something like docker-gc on each and every node in the pool with a regular cadence.
          Frankly, to manage a whole pool of Docker nodes like I'm thinking, that may very well have to be something we do anyway and that would just become part of the requirements to be a Docker node.

          I'm just not sure if everyone else thinks that running something like docker-gc totally separate from the Jenkins job creating the images is a suitable solution.
          Does that make sense, or am I missing something still?


          Patrick Wolf added a comment -

          Based on a potential change for JENKINS-40866 it would be possible to run docker-gc in the post section of a Pipeline to clean up old images, assuming docker-gc is installed on the agent.

          pipeline {
            agent {
              label 'mylabel'
            }
            stages {
              stage ("Build") {
                agent {
                  dockerfile {
                    filename 'Dockerfile'
                    reuseNode true
                  }
                }
                steps {
                  echo "Foo"
                }
              }
            }
            post {
              always {
                sh 'docker-gc'
              }
            }
          }
          

          The top-level agent allocates a node and then individual stages can run inside the container, reusing the same workspace. When the Pipeline completes you always run docker-gc and clear out old images. (By default, any image not used in the last hour is cleared.)
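
          If the one-hour default is too generous, docker-gc's grace period can be shortened via its GRACE_PERIOD_SECONDS environment variable (the same variable quoted from the script later in this thread), for example:

          post {
            always {
              // Assumes docker-gc is installed on the agent; 10-minute grace period instead of 1 hour.
              sh 'GRACE_PERIOD_SECONDS=600 docker-gc'
            }
          }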


          Jesse Glick added a comment -

          I think PMD should do the equivalent of

          node {
            def img = docker.build('whatever')
            try {
              …
            } finally {
              sh "docker rmi ${img.id}"
            }
          }
          


          James Dumay added a comment -

          I think Docker has this built in now: https://docs.docker.com/engine/reference/commandline/system_prune/ and https://docs.docker.com/engine/reference/commandline/container_prune/
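
          For illustration, the built-in prune commands could stand in for docker-gc in a post block like the one shown earlier (the 24-hour window is arbitrary, and whether pruning is safe depends on what else runs on the node):

          post {
            always {
              // Remove stopped containers, then unreferenced (dangling) images, older than 24 hours.
              sh 'docker container prune -f --filter "until=24h"'
              sh 'docker image prune -f --filter "until=24h"'
            }
          }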

          Andrew Bayer added a comment -

          My call here is that Docker image management is out of scope for Declarative to deal with.


          James Dumay added a comment -

          Good call abayer


          Brian J Murrell added a comment -

          I would like to disagree at a higher level.  The obfuscatingly named images (a hash of some sort) are being created by Jenkins, and because they use such an obfuscated name, managing their accumulation outside of Jenkins is difficult at best.

          For example, to go along with the idea that it is up to me, the admin (or a script I write) to remove old, no-longer used images, which represent past iterations of {{Dockerfile}}s in my pipeline jobs, which of the following images are current and which are older iterations?

          # docker image ls
          REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
          <none>                                     <none>              ff479182ebeb        4 days ago          1.42 GB
          a831c15b87dbd4231d6947a16cb3a9af816cbd8d   latest              ac3742854a96        4 days ago          1.42 GB
          e8cd3c8b78088b6c52ee705ae10be4201359a5b3   latest              ac3742854a96        4 days ago          1.42 GB
          4249d3c8084d55fc3d2a99132a0f9bce66f6e5ec   latest              ac3742854a96        4 days ago          1.42 GB
          8447a9d1402c6562bdfc27950aada59b7b1e0593   latest              ce155bde643b        4 days ago          1.62 GB
          a1a0b41dfa855a77be010b372402702114bc1225   latest              ce155bde643b        4 days ago          1.62 GB
          a677138400e3274c0f3ccacb0d6919a892df06fb   latest              ce155bde643b        4 days ago          1.62 GB
          280f646bcd85076af3f7cb58afc3c221a426ad4c   latest              60e033d80253        4 days ago          1.76 GB
          a2ec752628e0decf2e09a39fabfd1e08afdbe67f   latest              60e033d80253        4 days ago          1.76 GB
          cbc1131e470e384e815c278b456f8ce695657a89   latest              60e033d80253        4 days ago          1.76 GB
          24e62ec336d1449f98fb7474a8496af0707c0588   latest              caf0d9564161        7 days ago          1.42 GB
          c761e9f12da336e348736f8c32e83e53d0c41ae4   latest              caf0d9564161        7 days ago          1.42 GB
          0b0baa88ac6d8ca0e4d1ec58b8d8ee475ba0978b   latest              1b923c4a3d19        7 days ago          1.45 GB
          79f5fc427029a540383e43637806c27ff56af029   latest              9515ddc1f273        7 days ago          1.6 GB
          <none>                                     <none>              af1113a0c00c        7 days ago          1.6 GB
          <none>                                     <none>              1622a837d8b6        3 weeks ago         1.62 GB
          46807bbb74e92fbdbe56fced9c109b8ed0367036   latest              353bbbbb4f1b        3 weeks ago         1.62 GB
          d7c383939c4a517002774159d046d7f310f881c8   latest              353bbbbb4f1b        3 weeks ago         1.62 GB
          4942b00624bbc67cb2ac9c1832200ce421962c2d   latest              3c768e8f5305        3 weeks ago         1.58 GB
          8b18a2c252deea8b6901d61877a13bd7567ba2f5   latest              40039dfa6469        3 weeks ago         1.6 GB
          7b402774ab1d76f7d3a240f53780cee70a2da39c   latest              78cd8f4f275e        4 weeks ago         1.55 GB
          d2c8168b9d4e27bd0426deb78972664c19e97e8b   latest              0ae5d7965ca6        4 weeks ago         1.61 GB
          56043f81efbf7289d621c112b82933220763fbb4   latest              340fddd2839f        4 weeks ago         1.45 GB
          115282b91e727adbf38cdeb8cdf6c1ef98be6593   latest              0b600893851a        4 weeks ago         1.41 GB
          3163e91161e120ac113c2f191742689178ae01ee   latest              e1d8225a6c03        4 weeks ago         1.44 GB
          6d5d2a2d75e1825fd624ceddfabaabc3d2dcbeb1   latest              e1d8225a6c03        4 weeks ago         1.44 GB
          65fbd7397d29d5649e5ac73664aeab9eedde969b   latest              2563f445defd        4 weeks ago         1.69 GB
          f880bb025268313fbd2fa186d75498129f92b622   latest              2563f445defd        4 weeks ago         1.69 GB
          e4858d566cbee1aa9779ab329d4582e27ee74e4f   latest              d474affa1aeb        4 weeks ago         1.28 GB
          0120a36aba35ee597208a684226d44cb569f0d43   latest              c76105e5b17a        4 weeks ago         1.28 GB
          dba890385ab8e5f278e432b53354dd96065e44b9   latest              c76105e5b17a        4 weeks ago         1.28 GB
          b449ce5eb8b256896bddfb05bd7bfc5befaabf5c   latest              6ef942624bea        4 weeks ago         1.28 GB
          1b063af35c6d4599b83d38e8c334afc65418bedf   latest              0e664a829edf        4 weeks ago         1.42 GB
          3bb0bab0fc5d15716dc2f627b18e6f9abec9cfd1   latest              0e664a829edf        4 weeks ago         1.42 GB
          cc1c942fc9549340d5ce3be06d919383a8061e32   latest              de6d4a254a9e        4 weeks ago         1.76 GB
          efead29d4ffb6d190a8c1bf6dc7766ea3602a254   latest              de6d4a254a9e        4 weeks ago         1.76 GB
          420ccccaddd961b589a120c469c12fc5b05efa05   latest              de6d4a254a9e        4 weeks ago         1.76 GB
          477f7235e5c153a6fcc264835796b1325cb0f385   latest              de6d4a254a9e        4 weeks ago         1.76 GB
          96e98347f4b9891e7f31b15a28e1ac4c92745fdd   latest              04cb6a316283        5 weeks ago         1.14 GB
          docker.io/opensuse/leap                    15.0                fc31f6f2561c        5 weeks ago         102 MB
          465f4e0140a0bcb68c294b5803310684ea138248   latest              345377cec85c        5 weeks ago         1.29 GB
          39ca2b5b6025114172753539f2761622c4712258   latest              55e801d57ebb        5 weeks ago         1.29 GB
          435a6237a036627061bebf02de62396897fa04df   latest              a107ca16dda1        5 weeks ago         1.29 GB
          138305b687cc264df7f5d3576aa16ce9f3e6b86b   latest              828a27a31e21        5 weeks ago         1.46 GB
          3cd3e822a10c1cc7b7afdec2ef92912a8c350bc8   latest              5655450675ef        5 weeks ago         1.26 GB
          8ee86f0a418d570a02bbfbcde2d16ecf54072a0a   latest              f80d8f997aaf        6 weeks ago         1.44 GB
          8a1a87d17dc87caaeb3c750f44279236c4c9dbf8   latest              4b865e338520        6 weeks ago         1.16 GB
          docker.io/opensuse/leap                    15                  1761e347bba2        6 weeks ago         102 MB
          029cc87b07cf6d3eb3ed31cd8399cf5a75f78d44   latest              89a545cf55bb        6 weeks ago         1.33 GB
          23e988910d11756b30b2351ab00b99d6395075b9   latest              89a545cf55bb        6 weeks ago         1.33 GB
          7f97ea83f8bd099fa94144a6909d4ed45f8c8355   latest              89a545cf55bb        6 weeks ago         1.33 GB
          81363c91d4914aeb42c5a97078cae6b00a48497c   latest              89a545cf55bb        6 weeks ago         1.33 GB
          b0aaaec144ae35f0ab8b2610b459e0983b55de7e   latest              89a545cf55bb        6 weeks ago         1.33 GB
          b3c4f64ba06b8d9891513a136443e88074b526ef   latest              89a545cf55bb        6 weeks ago         1.33 GB
          bc9fb02ac4faa8ce784b8355f421d209c7f978dc   latest              89a545cf55bb        6 weeks ago         1.33 GB
          8f7436b4c9798db556fdc0ccf5ae6faaff33b531   latest              14b28fbe0c52        6 weeks ago         1.31 GB
          b52ff0640a918ce41958cc3c65d5d355fc6b19bb   latest              b5f5d35aca30        7 weeks ago         1.57 GB
          bfa7b1bcb59d498ef077c89f48f804697d2c768e   latest              b5f5d35aca30        7 weeks ago         1.57 GB
          24ac056b36c25e7c0a4b5b2bdd8da078b1c4eeb2   latest              d4b5dcadc5e1        7 weeks ago         599 MB
          54a2bac59f4587f0ebe19467a421d34ef460247b   latest              89c532aee9cb        8 weeks ago         599 MB
          8b56c3889c0e8cda511e79c41999085d9fe2f5e3   latest              c92156f3e0f4        8 weeks ago         1.28 GB
          e90d43a8659a298b45c5be4f39419d6a968957a4   latest              e583267578e6        8 weeks ago         504 MB
          f823e53dc7ef6c79143ed8ba15526e8955526e4b   latest              0c01058cde89        2 months ago        1.77 GB
          174e89aaa76f9ef7d339a1d773a0c78c787edb18   latest              6b9f78351a0c        2 months ago        1.16 GB
          8bbaa11cef1bd4400c8a611a531f9a186881ab6f   latest              6b9f78351a0c        2 months ago        1.16 GB
          cb41d6e8740321105c263a19cd4987027635cfd2   latest              6b9f78351a0c        2 months ago        1.16 GB
          212be2a9f5b57c6e93ddee9f3717b3c51890336f   latest              469c19fad18a        2 months ago        1.54 GB
          c53f9b58bb5a01c88b0b8a2c9002319baf61912d   latest              469c19fad18a        2 months ago        1.54 GB
          800a16822794950dc9166d17bbbc201da85df936   latest              e0651c1932f6        2 months ago        1.41 GB
          81154e260723c3be387a3a98bbeca82dc1473b80   latest              e0651c1932f6        2 months ago        1.41 GB
          955c8572b32ad705e2c64e121f1c941d7dc00ed9   latest              e57001ef342a        2 months ago        1.28 GB
          cfc7fc6fad121bcc5f7416069e5e1925a9d393b0   latest              dd7a6150148a        2 months ago        1.28 GB
          1cf2793f4965db2b78357498add59f22b4880b3d   latest              5695e01b5287        2 months ago        1.56 GB
          d2dd26d50473ba728600376945c35fe487cb65eb   latest              5695e01b5287        2 months ago        1.56 GB
          0f4525a1f878ed062d072cd4de794ee5af8335ac   latest              5695e01b5287        2 months ago        1.56 GB
          08437018300ebe8355f26d98435e6dc28439e6aa   latest              18f2caa5927b        2 months ago        1.93 GB
          347ccd0d34eeb6d7a98c0b29c3b191ba5786ec03   latest              431dec22ebca        2 months ago        1.93 GB
          48c07934d34d5776e0b30534358d003daed89b9e   latest              c3c778ca15b8        2 months ago        1.73 GB
          4f726bbaa49d990f65e2b32fa28b336e803c85e0   latest              50e10678c29a        2 months ago        1.38 GB
          docker.io/ubuntu                           18.04               cd6d8154f1e1        4 months ago        84.1 MB
          docker.io/centos                           7                   5182e96772bf        5 months ago        200 MB
          docker.io/opensuse                         leap                35057ab4ef08        9 months ago        110 MB
          

          I don't know which of those images are the result of current {{Dockerfile}}s in my pipeline jobs and which are the result of previous iterations of {{Dockerfile}}s.

          Is there some way to determine this?

          How does the previously suggested docker-gc know which of the above are current and which are stale?  The comment in the script says "# Find images that are created at least GRACE_PERIOD_SECONDS ago", but creation date is nowhere near an indicator of whether an image is current or not.  Some kind of last-used time would be, but Docker doesn't provide that.

          So lacking that functionality in Docker, I would submit that if Jenkins is creating the images and is naming them in an obfuscating way, Jenkins is responsible for cleaning up stale images.

          To that end, in the steps of a pipeline job that is running in a docker container, is the repository ID known?  It's displayed in the output of a pipeline job:

           + docker inspect -f . f880bb025268313fbd2fa186d75498129f92b622

          But is it available to, say, an sh command in a step of the job, so that I can start doing my own tracking for proper garbage collection (which I would submit docker-gc is not)?
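
          One workaround along those lines (a sketch only, not plugin behaviour): give the image an extra, predictable tag via additionalBuildArgs and reference that name from sh steps. The image name below and the use of BRANCH_NAME (multibranch jobs only) are assumptions:

          pipeline {
            agent {
              dockerfile {
                filename 'Dockerfile'
                // Second tag alongside the hash tag Jenkins applies; image names must be lower-case.
                additionalBuildArgs "--tag myproject/${env.BRANCH_NAME.toLowerCase()}:${env.BUILD_NUMBER}"
              }
            }
            stages {
              stage('Track') {
                steps {
                  // The image is now addressable by a name you chose, e.g. for your own GC bookkeeping.
                  sh "docker image inspect --format '{{.Id}}' myproject/${env.BRANCH_NAME.toLowerCase()}:${env.BUILD_NUMBER}"
                }
              }
            }
          }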


          Attila Szeremi added a comment - - edited

          I disagree with brushing this ticket off to use `docker-gc` instead.

          First of all, it makes no sense to use `docker-gc` in the `post` section of the declarative pipeline, because `docker-gc` only removes images that are older than (by default) 1 hour. The image we'd want to clean is very fresh though.

          Secondly, `docker-gc` (to me) undesirably removes all other images as well, including ones I might still be using. If I have a cron job set to run a Docker image every 2 hours, and `docker-gc` kept removing the image, then I would need to re-download the image from the registry each time, wasting time and bandwidth.

          It would be great if either the generated image tag would automatically be removed on build finish, or if the image tag were predictable so that I could clean up manually in the end. Or simply just making the image tag available so I could read it.

          Currently here is how I clean up my built Docker image. I am forced to use imperative scripts to build the Docker image to get the image tag so that I could clean up in the end.

          EDIT: I have fixed the links in the above paragraph.
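
          (That linked script isn't reproduced here; the following is only a sketch of the imperative pattern described, in scripted syntax, with an illustrative tag name, not the author's actual code.)

          node {
            checkout scm
            def tag = "ci-${env.JOB_BASE_NAME}-${env.BUILD_NUMBER}".toLowerCase()
            def img = docker.build(tag)
            try {
              img.inside {
                sh 'make test' // illustrative build/test step
              }
            } finally {
              // The tag is known up front, so cleanup is a one-liner.
              sh "docker rmi ${tag} || true"
            }
          }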


          Allan Lewis added a comment -

          My current workaround for this is to add --tag some-tag-i-understand to additionalBuildArgs. I then have a cleanup script run by cron on each node that deletes all of Jenkins's random-tagged images. If any of these have another tag due to --tag, this will just drop the random tag; otherwise it will delete the image.
          I think deleting by date is an antipattern anyway since images built a long time ago aren't necessarily unused and images built recently aren't necessarily worth keeping.
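
          A sketch of such a cleanup expressed as a small scheduled Pipeline rather than system cron (assumptions: the agent label, the daily schedule, and that the 40-hex-digit repository names shown earlier in this thread are the hash tags Jenkins applies):

          pipeline {
            agent { label 'docker' }
            triggers { cron('H H * * *') }
            stages {
              stage('Prune hash-named images') {
                steps {
                  // Untag (or delete, if not tagged elsewhere) every image whose repository is a bare SHA1.
                  sh '''
                    docker images --format '{{.Repository}}:{{.Tag}}' \
                      | grep -E '^[0-9a-f]{40}:' \
                      | xargs --no-run-if-empty docker rmi
                  '''
                }
              }
            }
          }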


          Allan Lewis added a comment -

          My call here is that Docker image management is out of scope for Declarative to deal with.

          I agree, but Jenkins shouldn't make it unnecessarily difficult. If Docker images were tagged with some combination of the job name and other metadata instead of just a hash, that would help.


          Brian J Murrell added a comment - - edited

          allanlewis_youview What's an example of some-tag-i-understand?  Is it the same for every run of a given job, or is there some kind of serialiser in there so that you know which is the latest, etc.?

          Doesn't deleting all of the Jenkins produced hashy type tags mean that Jenkins won't be able to find an image to re-use for a future run of a job where the Dockerfile has not changed?


          Allan Lewis added a comment -

          Hi brianjmurrell - for multi-branch pipelines we're currently using JOB_NAME (plus implied :latest). That has the effect of retaining the image from the tip of every branch since JOB_NAME is <repo>/<branch> in the multi-branch case. We don't yet have a strategy for deleting images from branches that no longer exist on the remote, but that shouldn't be difficult to script.
          For some non-multibranch pipelines we have, that can be triggered with an arbitrary Git ref via a parameter, we're using "${JOB_BASE_NAME}/${params.GIT_REF}".toLowerCase(). (Docker image names have to be lower-case.)
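
          For reference, the naming described above fits into additionalBuildArgs roughly like this (a sketch; assumes a multibranch job where JOB_NAME is <repo>/<branch>):

          agent {
            dockerfile {
              filename 'Dockerfile'
              // Image name <repo>/<branch> with the implied tag :latest; image names must be lower-case.
              additionalBuildArgs "--tag ${env.JOB_NAME.toLowerCase()}"
            }
          }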


          Brian J Murrell added a comment -

          Hrm.  So for a GitHub Organisation PR, the $JOB_NAME will be github-org/project/PR-XX.

          First of all, that's an invalid tag name.  But let's assume we can make it sane by removing the {{/}}s.

          But more importantly, when that PR lands to master, the tag on the docker image will still be for PR-XX even though it will actually now be the new current image being used for master (I think).  But it's not tagged for master on that project, it's tagged for PR-XX on that project.

          Granted when (the locally written script for) garbage cleanup removes that PR-XX image because PR-XX is closed, the master branch build will build a new (identical) docker image to replace the just removed one.  But that's an extra build that shouldn't be necessary.

          That said, it's probably still better than simply using the "oldest built" garbage collection mechanism that is being suggested in this ticket.

          Also, allanlewis_youview, you didn't respond to my other concern:

          Doesn't deleting all of the Jenkins produced hashy type tags mean that Jenkins won't be able to find an image to re-use for a future run of a job where the Dockerfile has not changed?

           


          Attila Szeremi added a comment - - edited

          brianjmurrell
          Doesn't deleting all of the Jenkins produced hashy type tags mean that Jenkins won't be able to find an image to re-use for a future run of a job where the Dockerfile has not changed?

          First of all, Jenkins doesn't even do that. If a build is rerun, it just builds the Docker image again. Secondly, it's not like Jenkins does any kind of caching for non-Docker agents (like caching the node_modules/ generated from an npm install). And besides, Docker already has decent caching built in: for any step in the Dockerfile whose inputs haven't changed since last time, Docker automatically re-uses the cached layer, so no additional help from Jenkins is needed.


          Allan Lewis added a comment -

          Hrm. So for a GitHub Organisation PR, the $JOB_NAME will be github-org/project/PR-XX.

          I'm not using a GitHub org, so I'm not sure about that. My case is using a manually-configured Git URL.

          First of all, that's an invalid tag name. But let's assume we can make it sane by removing the {{/}}s.

          I'm not using the job name as a tag, I'm using it as the image name, and image names can contain slashes.

          Granted when (the locally written script for) garbage cleanup removes that PR-XX image because PR-XX is closed, the master branch build will build a new (identical) docker image to replace the just removed one. But that's an extra build that shouldn't be necessary.

          Doesn't deleting all of the Jenkins produced hashy type tags mean that Jenkins won't be able to find an image to re-use for a future run of a job where the Dockerfile has not changed?

          No, because when master is built, it will build the image again - cached if it's on the same node or if one implements push-pull with a registry - and tag it as <repo>/master. If we then prune the tag from the branch, we'll still have the image as it will be tagged for master.

          I'm not saying my solution will work for everyone, I just posted it in case it was useful for others.


          Dwight Guth added a comment -

          I definitely side with the people who think that this was resolved prematurely. We have been running `docker image prune` on our Jenkins node that we recently migrated to dockerfiles, and each time it ran it was freeing up less and less space. What we eventually realized is that because Jenkins is tagging every build with a tag corresponding to the hash of the dockerfile, when the dockerfile changes, it does not untag the old build (because the new tag is different from the old tag), so the old build is not marked as dangling, and you have to run the significantly more aggressive `docker image prune -a` to clean it up. I believe Jenkins should be tagging these builds with something that is deterministic across builds to the same job and branch, and deleting all the tags associated with change requests when the change request closes and the jenkins branch is deleted.
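
          In a post block, that more aggressive prune might look something like this (a sketch; the one-week window is arbitrary):

          post {
            always {
              // "-a" also removes images that still carry a tag but are not used by any container,
              // which is what the stale hash-tagged images are; "until" keeps recent images as cache.
              sh 'docker image prune -a -f --filter "until=168h"'
            }
          }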


          Michael Harris added a comment - - edited

          Having some mechanism to delete the generated image is really important. If we have a lot of builds in a short time our nodes still run out of disk space before the scheduled job runs and wipes out old images. Jenkins is creating the images, and at a minimum needs to expose a way to clean up the image. Even if it was only added in the scripting syntax that would be sufficient.


          Liam Newman added a comment -

          Bulk closing resolved issues.


          Don Schiewer added a comment -

          This is still an issue and there is no good solution offered.


          Koen Dierckx added a comment -

          This is the shortest solution I could find without reverting to the scripted workflow. It would be nice if this could be done automatically.

           

          agent {
            dockerfile {
              filename 'test.Dockerfile'
              additionalBuildArgs "-t jenkins-test-build:${env.BUILD_NUMBER}"
            }
          }
          post {
            always {
              echo 'Cleaning up'
              sh 'docker rmi --force $(docker images --quiet --filter=reference="jenkins-test-build")' /* clean up dockerfile images*/
            }
          }
          

           


          Eliseo Martínez added a comment -

          I've also been bitten by this.

          Currently, I'm solving it labeling & tagging my images in a predictable way, including project & branch name, and then, at post time:

          • generate the list of legal image tags for the project (one for each branch)
          • get the list of all existing images for the project
          • subtract first list from the second
          • delete those images

          That deletes the jenkins-generated tag for the current build (but not my predictable tag), and also takes care of deleting images belonging to deleted branches.

          I also do the same for named volumes, by always creating them with a common prefix including project and branch name.

          Implementation goes along these lines:

          # In the Dockerfile: add a label so that jenkins-generated
          # tags can easily be identified as belonging to this project
          LABEL project=PROJECT_NAME
          
          // tag generated image predictably, including project & branch
                agent {
                  dockerfile {
                    additionalBuildArgs "--tag PROJECT_NAME/${env.BRANCH_NAME}:latest"
                    reuseNode true
                  }
                }
          
          // do cleanup at the end of every build
            post {
              always {
                dockerCleanup()
              }
            }
          
          // aux
          
          def dockerCleanup() {
            echo 'Cleaning stale docker images'
            sh oneline('''
              docker image ls -f
                label=project=PROJECT_NAME
              | tail -n +2
              | cut -d ' ' -f 1
              | egrep -v "$(
                  git branch -r
                  | sed -r 's# +origin/##'
                  | xargs -I{} -n 1 echo -n "PROJECT_NAME/{}|"
                  | sed 's/|$//'
                )"
              | xargs --no-run-if-empty docker image remove -f
            ''')
          
            echo 'Cleaning stale docker volumes'
            sh oneline('''
              docker volume ls -q
              | grep '^hyperscan-native'
              | egrep -v "$(
                  git branch -r
                  | sed -r 's# +origin/##'
                  | xargs -I{} -n 1 echo -n
                  "PROJECT_NAME_{}_VOLUME_NAME|"
                  | sed 's/|$//'
                )"
              | xargs --no-run-if-empty docker volume remove -f
            ''')
          }
          
          def oneline(obj) {
            obj.toString().replace("\n", " ").replaceAll("\\s+", " ")
          }
          
          

           

          That is working OK for me, but it's a stretch. I agree that if it's Jenkins tagging the images in an unpredictable way, it should be Jenkins that deletes them when they become stale.


          Yannick Koehler added a comment -

          A proper solution would also be for Jenkins to use DinD (Docker-in-Docker) before creating the docker agent image, so that the whole docker environment used for the job gets deleted afterwards, be it the sub-image for the agent or others created by the job.  Or at least provide this as an option.
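
          A rough sketch of that idea in scripted syntax, using a throwaway docker:dind sidecar (assumptions: the agent allows privileged containers, and the daemon is exposed without TLS via DOCKER_TLS_CERTDIR=; this is not existing plugin behaviour):

          node {
            checkout scm
            // Everything built against this daemon disappears when the sidecar is removed.
            docker.image('docker:dind').withRun('--privileged -e DOCKER_TLS_CERTDIR=') { dind ->
              docker.image('docker:latest').inside("--link ${dind.id}:docker -e DOCKER_HOST=tcp://docker:2375") {
                sh 'docker build -t build-env .'
                sh 'docker run --rm build-env true' // illustrative use of the image
              }
            } // sidecar and all images built inside it are gone here
          }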


            Assignee: Unassigned
            Reporter: Gavin Llewellyn (gllewellyn)
            Votes: 5
            Watchers: 23
