Jenkins / JENKINS-41805

Pipeline Job: deleteDir() deletes only the current directory, but the @script and @tmp dirs are still there in the workspace.

    • Jenkins 2.244

      Directories with an at sign in the name (like @tmp, @script, and @libs) are not removed when using deleteDir() in a pipeline stage.

      They are never cleaned up, even by the built-in build discarders.
      On our instances we have 2172 folders with @libs (each a copy of the shared library for the build),
      which is taking up 7 GB of space.
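For a sense of scale, the space these leftover directories consume can be measured with a quick shell snippet (a sketch; the workspace root path is an assumption, adjust it for your install):

```shell
#!/usr/bin/env bash
# Sketch: summarize space used by leftover @tmp/@script/@libs directories.
# WORKSPACE_ROOT is an assumption -- point it at your Jenkins workspace root.
WORKSPACE_ROOT="${WORKSPACE_ROOT:-/var/lib/jenkins/workspace}"

# List only the special helper directories; the @ suffix is appended by
# Pipeline to the base workspace name, so a top-level name match suffices.
find "$WORKSPACE_ROOT" -maxdepth 1 -type d \
    \( -name '*@tmp' -o -name '*@script' -o -name '*@libs' \) -print0 |
  xargs -0 -r du -sh
```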


          Ryan Campbell added a comment -

          Please reopen with step-by-step instructions to reproduce the problem and a clear description of what you expect to happen.


          Stefan Droog added a comment - - edited

          Multibranch pipeline (Jenkinsfile):

           

          stage('Checkout') {
              checkout scm
          }

          stage('Build') {
              def resultDist = sh returnStdout: true, script: './gradlew build'
              println resultDist
          }

          stage('Cleanup') {
              deleteDir()
          }

          This will create some directories in the Jenkins workspace, e.g: 

          MyBranch-GWI555O7BRZQK2BI4NIP67CEYQUZYH36AKCBALPQ2TWX4CIGNYIZ
          MyBranch-GWI555O7BRZQK2BI4NIP67CEYQUZYH36AKCBALPQ2TWX4CIGNYIZ@tmp

          After the 'Cleanup' stage, the directories with an at sign are still there.

          We expect that all directories are removed.


          Hiten Prajapati added a comment - - edited

          Yes stefan1509, you are right.
          I am still facing this issue.
          Currently I have to delete that directory manually.

          I hit this when my storage was fully occupied; I have many branches, and each branch takes up a lot of space after a completed build.


          Baptiste Mathus added a comment -

          I fail to understand how this issue can be set to "Important" without that importance being explained in detail. At first sight it seems like one of Minor or lower priority. Not a bug, by the way; more an improvement.

          So given this, I would probably not hold my breath that this gets fixed anytime soon unless the people who think this should be configurable contribute the fix.

          Alexandre Feblot added a comment - - edited

          In my case, I'm doing some dockerized work with admin privileges in a specific subdirectory created for this purpose, which is supposed to be removed afterwards. So these additional temp dirs are created as root. Once outside of the container, the next time Jenkins starts the job it obviously fails to delete this root-owned directory, so I have to take care of it "manually" from within the container. OK, our Docker should maybe be set up differently so that a container root user is not mapped to the global root user, but still, a deleteDir() function which does not delete all leftovers is not just a minor annoyance.


          Sarah LeGault added a comment -

          In our case, we are using multibranch pipeline jobs. We also see this issue occurring. This results in our drive filling up with folders that don't get cleaned up properly. Jenkins is running on a VM in Azure in our environment.


          Oleg Korsak added a comment -

          Our Jenkins runs out of space once per day while having 50GB of space. Very intensive building occurs. Please fix cleanup


          Alexander Vasiliev added a comment -

          I face this issue as well. The @libs and @script directories contain too many files, so the Jenkins master runs out of inodes and must be cleaned manually.

          Ernesto Moyano added a comment -

          hiten_prajapati, I used the Workspace Cleanup plugin; try this:

          ws(pwd() + "@tmp") {
              step([$class: 'WsCleanup'])
          }

          Alexander Trauzzi added a comment -

          Experiencing this as well on our master node. Running on Azure, we would rather not have to pay for infinite HDD space.

          Niels van Aken added a comment -

          We're experiencing this as well, but while using `cleanWs()`. Also, our file/folder ownerships are simply `jenkins:jenkins`, so no problem there as mentioned by alexf. The biggest problem with this bug is that it can create a massive overhead of inodes in some cases. We constantly experience `No space left on device` errors. We're now about to create a cronjob to periodically remove the @script / @tmp folders. I'd love to hear better workarounds.

          Niels van Aken added a comment -

          I'd like to add that we've just added the following workaround, which seems to work pretty nicely for now.

          post {
            always {
              cleanWs()
              dir("${env.WORKSPACE}@tmp") {
                deleteDir()
              }
              dir("${env.WORKSPACE}@script") {
                deleteDir()
              }
              dir("${env.WORKSPACE}@script@tmp") {
                deleteDir()
              }
            }
          }
          


          Alexander Trauzzi added a comment - - edited

          Is there any way to ensure the directories with this format on master get deleted as well?

          /var/lib/jenkins/workspace/application_PR-2375-5BLTOX3EZC6OGXGE7L2TFKXEY6Y3FYMDZ2VZ37EORCWPFIFMUKHA@script
          

          These directories are created when Jenkins goes to read and validate the Jenkinsfile initially. No further work appears to be done. These directories continue to accumulate until my master machine runs out of disk space. A bit of a nuisance.


          Niels van Aken added a comment -

          I'm not entirely sure what you mean. These workspace `@script` folders should be deleted with the snippet I posted.

          Alexander Trauzzi added a comment - - edited

          I suspect that only does it on the agent, not on the master. The path on my Jenkins master where files are accumulating is only referenced at the start of a build, and never again throughout.


          Niels van Aken added a comment -

          Ah, I see; we only build on the same machine. I'm not sure how a master node would work, or how to delete the dirs there.

          Hiten Prajapati added a comment -

          Please help; this is still a problem for me because we have many jobs with many branches per project, and that makes my SSD full.

          Henry Yei added a comment -

          I run a simple cleanup bash script inside a job every once in a while on my nodes.

          #!/usr/bin/env bash

          # get to main job directory
          cd ..
          ls -al
          cd ..
          ls -al

          # delete all @tmp files
          find . | grep @tmp$ | xargs -n1 rm -fr

          Of course you need to make sure you don't have any artifacts that you might match.
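One way to make that match stricter is to restrict it to top-level directories whose names end in @tmp, so ordinary files and nested artifacts that merely contain the string cannot be caught (a sketch; the workspace root path is an assumption):

```shell
#!/usr/bin/env bash
# Sketch: delete only top-level *@tmp directories, never regular files
# or nested artifacts that happen to contain "@tmp" in their path.
WORKSPACE_ROOT="${WORKSPACE_ROOT:-/var/lib/jenkins/workspace}"

find "$WORKSPACE_ROOT" -maxdepth 1 -type d -name '*@tmp' \
    -exec rm -rf {} +
```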

           


          Eric Nelson added a comment -

          The standard cleanWs() leaves at least the tmp directories behind on the agents. This is a really big problem for people using secret files, as it leaves them behind and they could be discoverable from another job!

          I'm going to try the workaround posted by vanaken.

          For what it's worth, jomega, I'm not seeing the orphaned @script directories on my Jenkins master as of version 2.108 for my multibranch pipeline jobs.


          Edgars Batna added a comment - - edited

          Not sure why these directories would be required. There are system-wide temporary directories that should be used for this. The temporary directories, whatever the implementation detail, should not concern us users.


          Cesar Ibarra added a comment -

          I'm having the same issue, but also a folder named WORKSPACE_cleanup is created.


          Steve Magness added a comment - - edited

          Has anyone solved this for the 'master - slave' setup? Our master is rapidly running out of disk space due to Mercurial performing a full checkout into @script just to get the Jenkinsfile (see https://issues.jenkins-ci.org/browse/JENKINS-50490). I'd like a solution that can be implemented within a scripted Jenkinsfile, e.g.:

          node('slave') {
              // do useful build things first

              cleanWs() // clean up workspace on slave
          }
          cleanWs() // clean workspace(s) on master (e.g. @script, @libs directories)


          Steve Magness added a comment -

          To answer my own question: similar to vanaken's solution, but running on 'master':

          node ('slave') {
              // do useful build things first
              cleanWs() // clean up workspace on slave
          }
          node ('master') {
              dir("${env.WORKSPACE}@libs") {
                  deleteDir()
              }
              dir("${env.WORKSPACE}@script") {
                 deleteDir()
              }
          }

          Although this only cleans the directories when the stages on the slave succeed. You can use try..catch to catch exceptions from the slave stages and perform the cleanup in a finally block if required.
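The try..catch variant mentioned above could look roughly like this (a sketch of scripted Pipeline based on the snippets in this thread; the 'slave'/'master' labels are assumptions carried over from the comment above):

```groovy
node('slave') {
    try {
        // do useful build things first
    } finally {
        // runs even when the build steps above throw
        cleanWs() // clean up workspace on slave
    }
}
node('master') {
    // clean the master-side helper dirs, as in the snippet above
    dir("${env.WORKSPACE}@libs") { deleteDir() }
    dir("${env.WORKSPACE}@script") { deleteDir() }
}
```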


          Sergii Kholod added a comment - - edited

          Together with "external workspace" plugin:

          node('master') {
              def extWs = exwsAllocate diskPoolId: 'p2000'
              exws(extWs) {
                  sh "env | sort >test.txt"
                  sleep time: 5, unit: 'MINUTES'
                  cleanWs cleanWhenNotBuilt: false, notFailBuild: true
              }
              cleanWs cleanWhenNotBuilt: false, notFailBuild: true
          }

          the result is even worse:

          $ ls -la /p2000/test-extws/
          total 0
          drwxr-xr-x 7 jenkins jenkins 62 Jun 25 13:42 .
          drwxr-xr-x 3 jenkins jenkins 23 Jun 25 11:38 ..
          drwxr-xr-x 2 jenkins jenkins 21 Jun 25 11:38 3
          drwxr-xr-x 2 jenkins jenkins 6 Jun 25 11:38 3@tmp
          drwxr-xr-x 2 jenkins jenkins 6 Jun 25 11:45 4@tmp
          drwxr-xr-x 2 jenkins jenkins 6 Jun 25 11:53 5@tmp
          drwxr-xr-x 2 jenkins jenkins 6 Jun 25 13:37 6@tmp

          Temp directories are flooding the workspace parent folder.

           


          J S added a comment - - edited

          Hello guys,

          I have the following Jenkinsfile:

           

          pipeline {
              options { disableConcurrentBuilds() }
              agent { label 'vhost01' }
              stages {
                  [..AllStages..]
              }
              post {
                  always {
                      cleanWs()
                  }
              }
          }
          

          I use a multibranch pipeline and have the problem that the folders under my node "vhost01" are not deleted. Can anyone help?



          Idan Adar added a comment -

          +1

          Our agents remain unclean because of this.


          Andrei Muresianu added a comment -

          +1, still a problem. The tmp directories do not add much value.

          Karl Parry added a comment - - edited

          +1, we now have about 200-300 @tmp/@script folders being created every day across several slave servers.

          Will add the script snippet provided above to current jobs for now.


          jlpinardon added a comment -

          +1. I have added a set of folder-delete operations in a post always block... As long as I have only one slave it is sustainable, but it will become ugly when using a label referencing several slaves.


          Alexander Samoylov added a comment - - edited

          batmat wrote: "Not a bug by the way, more an improvement."
          I strongly disagree. Jenkins creates the @tmp directory automatically and stores temporary files there; therefore it should also be removed automatically by Jenkins.
          Each tool that produces temporary data should be responsible for its removal. It is as easy as ABC.
          Following your logic, memory leaks are also "not bugs"...

          +1 for the fix (which should be trivial)

          Update: I confirm that the workaround

          dir(<dir> + '@tmp') { deleteDir() }

          is working. Luckily it does not create a nested @tmp@tmp. Thank you, vanaken.


          Oliver Gondža added a comment -

          Seeing the workarounds, I am wondering how wise it is to delete the pipeline helper folder(s) while the pipeline is still running.

          Szczepan Zaskalski added a comment -

          Why does my Jenkins create tmp directories in the master workspace for the Jenkins shared library instead of using the agent workspace?

          Steven Christenson added a comment -

          szczepix: That directory is on the MASTER to allow you to do Replay operations. It contains the content of any library that was loaded at run time. Without a copy, Replay can't work reliably.

          olivergondza: Indeed, deleting scripts that may well be executing is unsafe.

          However, in general deleteDir() should IMHO remove any copy on a slave (provided the slave isn't actually the master).

          Martin Karing added a comment -

          stevenatcisco: I don't think the directory is required for the Replay operation. If I delete the @libs workspace on the master by hand, Replay still works fine. There is a copy of the required files of the library in the directory of each build; I am guessing this one is utilized for whatever Replay requires it for, or the required commit is just checked out from the repository again.

          No matter: there is still a @libs directory among the master workspaces that just sits there forever. In my case it contains the implicitly loaded shared libraries for each job, and there are a lot of them due to heavy use of feature branches and pull requests. It can't be deleted by the pipeline, and it's not automatically deleted when the associated job in Jenkins is deleted. Also, I do not have any build executors enabled on the master node, so the workaround of switching to the problematic directory directly does not fly either, because as far as I know I can't get into the master workspaces from the pipelines.

          The one way I was able to come up with to get rid of those directories is to have the server run a nightly cron job that clears out the workspace directory of the master, so the issue does not get out of hand. That solution works, but I'd rather solve the issue inside Jenkins.
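Such a nightly cron job could be sketched as follows (the path and the one-day age threshold are assumptions; the age check is there to avoid deleting helper dirs of a still-running build):

```shell
#!/usr/bin/env bash
# Sketch of a nightly cleanup for leftover @libs/@script/@tmp directories
# on the master. Only touches directories not modified for over a day,
# to avoid pulling files out from under a running build.
WORKSPACE_ROOT="${WORKSPACE_ROOT:-/var/lib/jenkins/workspace}"

find "$WORKSPACE_ROOT" -maxdepth 1 -type d -mtime +1 \
    \( -name '*@tmp' -o -name '*@script' -o -name '*@libs' \) \
    -exec rm -rf {} +
```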


          Tim Jacomb added a comment -

          I'm not sure if this is filed against the correct plugin; in my opinion, libs should be automatically cleaned up when the build discarder is run.

          I think this should be done either in the workflow-cps-global-lib plugin or in a new extension plugin.

          Thoughts, bitwiseman or dnusbaum?

          I can possibly contribute a fix, but I'm just looking for direction on where this should be done.


          Tim Jacomb added a comment - - edited

          Opened a PR to Jenkins core for cleaning this up during workspace cleanup https://github.com/jenkinsci/jenkins/pull/4824


          Oleg Nenashev added a comment -

          timja Hi. Does https://www.jenkins.io/changelog/#v2.244 fully address it from your PoV?


          Tim Jacomb added a comment - - edited

          Yes, I believe so.

          To any of the many watchers: from Jenkins 2.244 these directories will be automatically deleted during workspace cleanup.

          This retains workspaces for 30 days by default and then deletes them. I can't see any documentation on it, but the configuration values can be found here:
          https://www.jenkins.io/doc/book/managing/system-properties/#hudson-model-workspacecleanupthread-retainfordays

          Note: if the workspace has already been deleted, then you'll need to delete existing libs / tmp / 2 directories manually.
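For reference, the retention window from the linked system-properties page is set when launching Jenkins; the 7-day value below is purely illustrative:

```shell
# Shorten workspace retention from the default 30 days to 7 (example value)
java -Dhudson.model.WorkspaceCleanupThread.retainForDays=7 -jar jenkins.war
```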


          Camden Mamigonian added a comment -

          So how exactly is this used? Does using deleteDir() clean out the entire workspace and the associated "@" directories? Is there any impact on cleanWs()?

          Tim Jacomb added a comment -

          cmamigonian does my comment above not explain it well enough?

          Jenkins core workspace cleanup will remove them; deleteDir doesn't do it, and neither does cleanWs.
          Possibly they could be extended to do it now that the core API is there.


          Camden Mamigonian added a comment -

          Yes, sorry, your comment is clear in terms of core cleanup. I wasn't sure if deleteDir or cleanWs support it. Do we know if those plugin(s) have plans to incorporate it?

          Tim Jacomb added a comment -

          Not that I know of, is there a reason that the automated cleanup isn't enough for you?


          Camden Mamigonian added a comment -

          If we have lots of jobs running on an agent, our storage can fill up quickly, and we like to clean up after ourselves once the pipeline is done running. We currently have a way to do it, but it's through our own code instead of a much simpler single call to either cleanWs or deleteDir.

          Naveen B added a comment -

          We are still seeing this issue in 2.277.1

          @tmp is left behind after

          dir('tst') {
              sh(
                  label: "ls",
                  script: "ls"
              )
              deleteDir()
          }


          Tim Jacomb added a comment -

          The resolution was a background task that removes them; deleteDir only deletes the current directory.


          Jesse Glick added a comment -

          Yeah, the Fixed resolution does not match the reported issue summary. deleteDir does exactly what it is documented to do: delete the contextual directory (which could be a subdirectory of the workspace, something else entirely, etc.), basically just like sh 'rm -rf .'.


            Assignee: timja (Tim Jacomb)
            Reporter: hiten_prajapati (Hiten Prajapati)
            Votes: 67
            Watchers: 91