
Remove lock resource name from global configuration when lock is released

      'Lockable Resources' within the Jenkins global configuration can grow quickly when lock names are dynamically created and transient in nature.

      It would be nice to clean up the global configuration when locks are released by removing the named lock entry.
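
      As a sketch of how this growth happens (hypothetical stage and lock names; the lock step comes from the Lockable Resources plugin), a pipeline that locks on a per-build name adds a new resource entry to the global configuration on every run:

```groovy
// Hypothetical pipeline: each build creates a uniquely named lock,
// and the auto-created resource entry stays in the global
// configuration after the lock is released.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Transient, per-build lock name, e.g. "deploy-123"
                lock("deploy-${env.BUILD_NUMBER}") {
                    echo 'Deploying...'
                }
            }
        }
    }
}
```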

          [JENKINS-38906] Remove lock resource name from global configuration when lock is released

          Javier Delgado added a comment - taylor01, that script has been working as a pipeline job since 10/17, on several Jenkins instances and versions. Won't the "odd errors" relate to methods needing to be approved via the instance's scriptApproval service?

          Mikko P added a comment - edited

          In addition to retainAll, you can use:

          // Run from the script console; filters the shared resources list in place
          def all_lockable_resources = GlobalConfiguration.all().get(org.jenkins.plugins.lockableresources.LockableResourcesManager.class).resources
          print all_lockable_resources
          all_lockable_resources.removeAll { it.name.contains("whatever_you_want_to_filter") }
          // Note: call LockableResourcesManager.get().save() afterwards to persist the change


          Shaun Keenan added a comment -

          Bump - curious if anything can be done about this.  Not having the ability to clean up locks that are auto-created seems like more than a "minor" priority...  this could quickly balloon out of control.

          Thanks a lot for the cleanup suggestion mip


          Sam Van Oort added a comment -

          vivek This one might be worth our team taking up, since locks are used rather heavily.


          Ricard F added a comment -

          Same case here!!!!


          Aaron D. Marasco added a comment - edited

          Using code from comments above, I have a Jenkins job that runs weekly to remove the ones that start with certain phrases. This may have race conditions (see later comments):

          stage ("JENKINS-38906") {
            def manager = org.jenkins.plugins.lockableresources.LockableResourcesManager.get()
            def resources = manager.getResources().findAll {
                (!it.locked) && (
                    it.name.startsWith("docker_rpminstalled") ||
                    it.name.startsWith("docker-rpmbuild") ||
                    it.name.startsWith("rpm-deploy")
                )
            }
            currentBuild.description = "${resources.size()} locks"
            resources.each {
                println "Removing ${it.name}"   
                manager.getResources().remove(it)
            }
            manager.save()
          } // stage
          
          


          Sami Korhonen added a comment -

          aarondmarasco_vsi You should synchronize access to LockableResourcesManager, otherwise you might cause a race condition. Considering LockableResourcesManager relies on synchronization, it should actually be a trivial task to delete a lock after it has been released.


          Sami Korhonen added a comment - edited

          We're using this to delete locks after our ansible plays:

          @NonCPS
          def deleteLocks(lockNames) {
            def manager = org.jenkins.plugins.lockableresources.LockableResourcesManager.get()
            synchronized (manager) {
              // Only delete locks that are neither held nor reserved, then persist
              manager.getResources().removeAll { r -> lockNames.contains(r.name) && !r.locked && !r.reserved }
              manager.save()
            }
          }
          

           Edit: I had time to study the problem further. While this does resolve the race condition when deleting an item from the list, it still isn't sufficient. The current lock allocation algorithm relies on locks never being deleted. I think that's something I can fix. However, the algorithm needs major rework: everything related to lock management has to be done with atomic operations, and to do so, management must be done in a single class. There might also be some scalability issues when allocating hundreds of locks that could be resolved as well.


          Aaron D. Marasco added a comment - Thanks skorhone for the heads-up. I don't need to worry about race conditions in my unique situation. However, I'll make a note in case others just see it and copypasta.

          Tobias Gruetzmacher added a comment - This should be fixed with the ephemeral lock support in release 2.6 - everything that is created automatically is now removed automatically.
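
          As a hypothetical sketch of the fixed behavior (lock name invented; requires lockable-resources 2.6 or later), a resource that the lock step creates on the fly is now ephemeral, so the manual cleanup scripts above are no longer needed for auto-created locks:

```groovy
// With lockable-resources >= 2.6, a resource auto-created by lock()
// is ephemeral: its entry is removed from the global configuration
// as soon as the lock is released.
node {
    lock("transient-${env.BUILD_NUMBER}") {
        echo 'resource entry exists only while this block runs'
    }
    // After the block exits, the auto-created resource is gone.
}
```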

            Assignee: tgr Tobias Gruetzmacher
            Reporter: tslifset Ted Lifset
            Votes: 41
            Watchers: 51
