JENKINS-34268

Lock multiple resources using the Pipeline lock step

      The current implementation of the Pipeline lock step allows blocking only a single resource.

      It should be extended to cover all the functionality of the plugin (making it applicable to non-freestyle jobs), such as blocking resources by label or requesting a lock for N resources.

      The DSL should look something like this:

      lock (resources: ['resource1', 'resource2']) {
        ... execution block ...
      }
      

      or

      lock (label: 'my-resources') {
        ... execution block ...
      }
      

      The behavior of the label parameter would be equivalent to:

      lock (resources: ['resource3', 'resource4']) { // if both resource3 and resource4 are labeled as 'my-resources'
        ... execution block ...
      }
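
      For the "request a lock for N resources" case, a possible shape (a sketch only; a quantity parameter is discussed in the comments below) would combine a label with the number of resources to acquire:

      // Sketch only: 'quantity' is a proposed parameter, not part of the current step.
      lock (label: 'my-resources', quantity: 2) {
        // ... execution block ...
      }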
      


          Antonio Muñiz created issue -

          Florian Hug added a comment -

          Besides the label functionality itself, the possibility to request a certain number of resources from that label would also be essential.

          I would personally prefer something like

          lock(label: 'fooBar', quantity: 2) {
            // whatever
          }
          

          Actually this is what keeps us from using pipelines.


          Antonio Muñiz added a comment -

          aeon512 Could you explain a real use case where the quantity configuration makes sense, please? It's not clear to me why one would want to acquire 2 resources without caring about which ones specifically.


          Florian Hug added a comment - edited

          Background:
          We dynamically create virtual machines which act as Jenkins slaves (using the swarm-plugin) to perform the actual build steps. That way, if an error occurs, we can archive the whole VM so that a developer can investigate and solve the problem. Thanks to linked snapshots, creating such a new VM takes only seconds and works very well.

          Problem solved by lockable resources:
          Since the overall amount of memory and cores on the host machine is limited, we use the Lockable Resources plugin to manage the distribution of memory (slices) and CPU cores.

          For example, let's say we have configured 16 resources (Memory Slice 1, Memory Slice 2, ..., Memory Slice 16), each representing e.g. 4 GB of memory; that is a total of 64 GB available for virtual machines. When a job starts and wants to create a VM with e.g. 16 GB of virtual memory, it simply tries to request 4 slices (= quantity) of 4 GB memory:

          lock(label: 'memory', quantity: 4) {
             // actual build steps
          }
          

          If this succeeds, enough memory is available and the job can continue. If not, the job blocks until the necessary amount of memory becomes available again (once other jobs have finished and shut down their VMs).

          The same principle applies to CPU (cores) as well.
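
          As a sketch of how both kinds of slices could be requested together (assuming both a 'memory' and a 'cpu' label are configured, and the proposed quantity parameter exists), the two locks could simply be nested:

          // Sketch only: assumes 'memory' and 'cpu' labels and the proposed 'quantity' parameter.
          // Both locks are held for the lifetime of the VM.
          lock(label: 'memory', quantity: 4) {
            lock(label: 'cpu', quantity: 2) {
              // create the VM with 16 GB RAM / 2 cores and run the actual build steps
            }
          }

          Acquiring the labels in the same order in every job keeps two builds from deadlocking on each other's partially acquired slices.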

          Does this explanation help?


          Antonio Muñiz added a comment -

          Thanks for taking the time to explain!

          Makes sense, although nothing prevents the steps inside the lock block from consuming more CPU/RAM than virtually acquired, right?


          Florian Hug added a comment -

          Yes and no.
          No: Once the VM is configured with a certain amount of CPU/RAM, that amount is fixed and no process inside the VM can use more. If a process inside such a VM wants to consume more memory, it simply receives an OutOfMemory exception.
          Yes: However, in theory, you can lock a different amount than you configure the VM with.

          However, in our Pipeline scripts we define a variable, e.g. required_memory_slices, and use it both to specify the necessary quantity and to set up the virtual machine. Hence, in this case, it is guaranteed that we lock the same amount that we configure the VM to use, so no process can consume more CPU/RAM.
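
          A minimal sketch of that pattern (required_memory_slices is our own variable; createVm is just a placeholder for whatever actually provisions the VM):

          // Sketch only: 'createVm' is a hypothetical helper, not a real Pipeline step.
          def required_memory_slices = 4  // 4 slices * 4 GB = 16 GB
          lock(label: 'memory', quantity: required_memory_slices) {
            // the VM is configured with exactly the amount that was locked
            createVm(memoryGb: required_memory_slices * 4)
            // ... actual build steps ...
          }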

          Antonio Muñiz made changes -
          Link New: This issue is related to JENKINS-30269
          Antonio Muñiz made changes -
          Link New: This issue is related to JENKINS-34273

          Antonio Muñiz added a comment -

          Ok, looks good. I've created a separate issue (JENKINS-34273) for quantity support in Pipeline, as it can be handled independently of this one.

          BTW, you have a nice CI configuration there.


          Florian Hug added a comment -

          Thanks! And we are keen on switching to the new Pipeline, but JENKINS-34273 is absolutely necessary for us.


            Assignee: Antonio Muñiz (amuniz)
            Reporter: Antonio Muñiz (amuniz)
            Votes: 19
            Watchers: 33