Status: Closed
Resolution: Won't Fix
Lockable resources plugin should not lock a resource when it is offline.
lock(label: <label>, quantity: 3, variable: 'RESOURCES')
will lock 3 resources without checking whether the nodes are offline
When servers are taken out for maintenance or require troubleshooting, there is currently no way to prevent a lock on the offline resource without ADMIN access (which essentially means removing the resource from the Lockable Resources Manager).
Normal users (without ADMIN access) cannot control locks on an offline resource for maintenance/troubleshooting.
As jimklimov already explained: this plugin has no concept of "offline" nodes, or even a concept of "nodes" at all. Each lockable resource is just a string. There are several ways to achieve what you propose; the most basic is to create jobs that take the lock and wait forever, and can then be killed when maintenance is over. This is probably not the prettiest solution, just the first one that came to mind.
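A minimal sketch of that "hold the lock forever" workaround as a declarative pipeline (the resource name `maintained-server` is a placeholder for your actual resource):

```groovy
// Hypothetical "maintenance" job: grabs the lock and holds it until aborted.
// 'maintained-server' is a placeholder for your actual resource name.
pipeline {
    agent any
    stages {
        stage('Hold lock for maintenance') {
            steps {
                lock(resource: 'maintained-server') {
                    // Hold the resource "forever"; abort this build when
                    // maintenance is finished to release the lock.
                    sleep(time: 365, unit: 'DAYS')
                }
            }
        }
    }
}
```

While this build is running, no other job can lock `maintained-server`; killing the build releases the lock again.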
I would agree that the permission concept for the plugin is currently pretty coarse - If you want to see that changed, please open a new ticket and be ready to "bring your own code"...
While there is a good purpose behind this request, I believe it is destined to be offtopic/WontFix here. The plugin manages the generic concept of resources and does not even care whether they are physically represented or not (you can use it, e.g., to throttle the maximum number of concurrent jobs by having them hold a limited pool of tokens).
That said, you can use the Groovy script option to implement your use-case-dependent logic and ultimately return `true` or `false` for whether a proposed resource is eligible for your job. The plugin calls such a script in a loop for each resource (your script should then also test that the resource matches the label expression you want) and builds a list of items with a "true" verdict, offering one (or the first?) of those to be locked by a job.
I am not sure whether it levels the load across physical resources (offering the first eligible one vs. a random one).
As part of this logic you can do anything; for some of our in-house tests, we do indeed have a script that probes SSH availability of a remote VM we'd be setting up with our product, so broken VMs are not offered to jobs. A trimmed-down example would be:
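Here is a trimmed-down sketch of such an eligibility script. It assumes the plugin's script binding exposes the candidate resource's name as `resourceName`, and that resources are named after resolvable hostnames; the `test-vm-` prefix is a hypothetical naming convention:

```groovy
// Runs once per candidate resource; return true to offer it to the job.
// Assumption: 'resourceName' is provided by the plugin's script binding
// and doubles as the hostname of the VM it represents.
if (!resourceName.startsWith('test-vm-')) {
    // Not a resource this job cares about (label-like filtering).
    return false
}
try {
    // Probe TCP port 22 (SSH) with a 5-second timeout.
    def socket = new java.net.Socket()
    socket.connect(new java.net.InetSocketAddress(resourceName, 22), 5000)
    socket.close()
    return true   // host answers on SSH: eligible for locking
} catch (java.io.IOException ignored) {
    return false  // unreachable: treat as offline, do not offer it
}
```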
With this, we do however miss another ability: to quickly see which resources were last diagnosed as dead. Arguably, this is out of LockableResources' scope as well (it rather belongs in Zabbix or a similar monitoring tool, or maybe in a custom job that inspects the list of resources and "locks" broken ones under a specific holder, unlocking fixed ones... maybe along the lines of this fine example: https://stackoverflow.com/a/52744986/4715872).
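Such a "quarantine" job could be roughly sketched as a Groovy system script along these lines. Note that the `LockableResourcesManager` calls below reflect the plugin's internal API as I recall it and should be treated as assumptions, and `isHostAlive()` is a hypothetical stand-in for whatever probe you actually use:

```groovy
import org.jenkins.plugins.lockableresources.LockableResourcesManager

def manager = LockableResourcesManager.get()
def deadHolder = 'health-check'   // marker "user" for quarantined resources

manager.getResources().each { resource ->
    boolean alive = isHostAlive(resource.getName())   // hypothetical probe
    if (!alive && !resource.isReserved()) {
        manager.reserve([resource], deadHolder)       // quarantine broken host
    } else if (alive && resource.getReservedBy() == deadHolder) {
        manager.unreserve([resource])                 // release fixed host
    }
}
```

Run periodically, this would make broken resources show up as reserved by `health-check` in the resources list, which partially addresses the visibility gap.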
But it would still be helpful to have such a status anyway, displayed in the same list of Available/Reserved/Locked Jenkins resources.