Sylvain Chagnaud says:
      Hi, it's a very useful plugin, but I have a small comment for a specific case. I want to revert to a snapshot before each job execution, but if several jobs try to run on the VMware slave at the same time, the VM can't revert to the snapshot. Do you have a solution? Thank you.

      jswager says:
      At the moment, I don't have a solution incorporated into the plugin, but I'm working on it. In the meantime, here's a workaround:
      1) Configure the slave with a disconnect type of Revert or Reset.
      2) In your test job, mark the slave temporarily offline. This will prevent further jobs from targeting the slave. It would be something like this:
      curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
      3) As a final step in your job, start a delayed shutdown of the VM. If it's Windows, something like "shutdown /s /t ..." (a sketch of steps 2 and 3 follows below).
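
      For reference, a minimal PowerShell sketch of steps 2 and 3 might look like the following. JENKINS_HOST, NODE_TO_DISCONNECT, the user/API token and the 60-second delay are placeholder assumptions rather than values from this comment, and depending on your Jenkins security settings a CSRF crumb may also be required:

      # Step 2: mark the node temporarily offline so no further jobs target it.
      $jenkins = "http://JENKINS_HOST"        # assumption: your Jenkins base URL
      $node    = "NODE_TO_DISCONNECT"         # assumption: the slave to take offline
      $auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("USERNAME:APITOKEN"))
      Invoke-RestMethod -Method Post `
          -Uri "$jenkins/computer/$node/toggleOffline" `
          -Headers @{ Authorization = "Basic $auth" } `
          -Body @{ offlineMessage = "back_in_a_moment" }

      # Step 3: schedule a delayed shutdown so the running build can finish first
      # (60 seconds is an example value; the original command above is truncated).
      & shutdown.exe /s /t 60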

          [JENKINS-12163] Cleaning the VM before starting the next job in the queue.

          Alexey Larsky added a comment - edited

          The workaround causes a hang on the second start, with the message:

          • Not launching VM because it's not accepting tasks

          It works only after toggling the node online by hand.


          OHTAKE Tomohiro added a comment -

          My workaround is (a rough PowerCLI sketch follows the list):

          1. Create jobs which
            1. Shutdown a VM using PowerCLI and PowerShell Plugin
            2. Revert to a snapshot using PowerCLI and PowerShell Plugin
            3. Power on a VM using PowerCLI and PowerShell Plugin
          2. Call jobs above using Parameterized Trigger Plugin
          3. Call job which I want to be run on the slave using Parameterized Trigger Plugin and NodeLabel Plugin
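
          A rough PowerCLI sketch of what the three helper jobs above run (vCenter address, credentials, VM name and snapshot name are placeholders; in the actual workaround each step is its own PowerShell build step, and VMware PowerCLI must be installed on the machine running it):

          # Connect to vCenter.
          Connect-VIServer -Server "vcenter.example.com" -User "USERNAME" -Password "PASSWORD"
          $vm = Get-VM -Name "jenkins-slave-01"            # assumption: the slave VM name

          # 1.1 Shut the VM down (Stop-VMGuest does a graceful guest shutdown instead, if preferred).
          Stop-VM -VM $vm -Confirm:$false

          # 1.2 Revert to the snapshot once the VM is powered off.
          $snapshot = Get-Snapshot -VM $vm -Name "clean"   # assumption: the snapshot name
          Set-VM -VM $vm -Snapshot $snapshot -Confirm:$false

          # 1.3 Power the VM back on so the slave can reconnect.
          Start-VM -VM $vm -Confirm:$false

          Disconnect-VIServer -Confirm:$false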


          Alexey Larsky added a comment -

          I haven't checked the last workaround, but it needs two jobs per task, which is a minus.


          nyoung02 added a comment -

          I have a similar (identical?) use case.

          I have a pool of slave machines, each with a set of snapshots, running under vSphere.
          I have a number of build and test jobs, each of which requires a slightly different configuration (snapshot).

          What I want to do is have these machines switched off until required, and when a job is initiated, it picks up the next available (offline) slave, resets the slave VM to the specified snapshot, and then starts it up. When the job is finished, it then closes down the machine and disconnects the slave.

          This would probably work with the delayed shutdown and I'll give that a go, but it would be better (cleaner and more reliable) if there were a tick-box option in the job configuration to shut the machine down on completion of the job. The other alternative would be to modify the slave configuration to allow reverting to a snapshot between jobs.

          Background: our test process depends on the machine being clean at the start; if it's not, we get phantom test failures. This means it's essential that the VM is reverted to a snapshot at the start of the run. We currently have the option set to shut down the VM if it's idle for 1 minute, but if jobs are queued up there is no idle time, so we end up with invalid test runs.


          Sylvain C. added a comment - edited

          My solution:

          • Configure the slave with a disconnect type of Shutdown or nothing.
          • Create two new jobs:
          • SwitchOfflineNode (changes the status of a node from Online to Offline)
            With a string parameter: name = NODE
            With a Windows batch build step:
            cd %CURL_HOME% && curl -k -u USERNAME:USERPASSWORD -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" %JENKINS_URL%computer/%NODE%/toggleOffline > nul 2>&1

            Replace USERNAME & USERPASSWORD

          • SwitchOnlineNode (changes the status of a node from Offline to Online)
            With a string parameter: name = NODE
            With a Windows batch build step:
            cd %CURL_HOME% && curl -k -u USERNAME:USERPASSWORD -d "Submit=This+node+is+back+online" %JENKINS_URL%computer/%NODE%/toggleOffline > nul 2>&1

            Replace USERNAME & USERPASSWORD

          • Configure your job:
            First build step (Windows batch command):
            set json="{\"parameter\": [{\"name\":\"NODE\", \"value\": \"%NODE_NAME%\"}], \"\": \"\"}"
            set url=%JENKINS_URL%/job/SwitchOfflineNode/build?delay=60sec
            cd %CURL_HOME% && curl -k -u USERNAME:USERPASSWORD -X POST %url% --data-urlencode json=%json%

            Replace USERNAME & USERPASSWORD
            This build step switches the node offline so that no new builds are scheduled on the VM.

          • On "Post build task" of your job (download post build plugin)
            Log Text = "Building" (to execute this task all the time like a "finally")
            Script
            set json="{\"parameter\": [{\"name\": \"NODE\", \"value\": \"%NODE_NAME%\"}], \"\": \"\"}"
            set url=%JENKINS_URL%/job/SwitchOnlineNode/build?delay=60sec
            cd %CURL_HOME% && curl -k -u USERNAME:USERPASSWORD -X POST %url% --data-urlencode json=%json%
            shutdown -s -f -t 30

            Replace USERNAME & USERPASSWORD
            This script shuts down the VM and afterwards switches the node back online so that the next job can run.

          I use this solution to run TestComplete (GUI test) jobs, which must start from a clean machine.


          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Jason Swager
          Path:
          pom.xml
          src/main/java/org/jenkinsci/plugins/vSphereCloudLauncher.java
          src/main/java/org/jenkinsci/plugins/vSphereCloudRunListener.java
          src/main/java/org/jenkinsci/plugins/vSphereCloudSlave.java
          src/main/resources/org/jenkinsci/plugins/vSphereCloudSlave/configure-entries.jelly
          src/main/webapp/slave-LimitedTestRunCount.html
          http://jenkins-ci.org/commit/vsphere-cloud-plugin/cc473a3d10142042bf1d53e139c3302b0be494be
          Log:
          Changes to address JENKINS-12163: Clean the VM after X many builds.


          Jason Swager added a comment -

          Fixed via https://github.com/jenkinsci/vsphere-cloud-plugin/commit/cc473a3d10142042bf1d53e139c3302b0be494be
          Targeted for 0.6

          Oren Chapo added a comment -

          Unfortunately, this doesn't work for Pipeline jobs.
          If I set the "Disconnect After Limited Builds" value to 1, the "What to do when the slave is disconnected" action never happens.

          It seems like "part of *" pipeline jobs that run on slaves are not detected as real jobs. I suspect this because "build history" is empty for a node that only runs parts of pipeline jobs.

          The outcome is that there is no way to ensure each test starts with a clean (from-snapshot) VM. If other "part of" jobs are queued for the node, they start running the moment a previous (partial) job finishes. I've tried the suggested workarounds, but ended up either breaking the queued job (which starts on a "dirty" machine, or breaks because the VM stops after the job has started running) or breaking the current job (which gets stuck because I shut down the slave before the job is finished).


          Alexandru Calistru added a comment -

          Do you have any updates on fixing this issue for Pipelines?

          Thank you


          Søren Rasmussen added a comment -

          I experience the same as Oren Chapo. Jenkins fails to register any history indicating that the job actually ran on the node.

          Has anybody found a workaround that enabled you to reboot the slave after a single build?


            Assignee: Unassigned
            Reporter: Alexey Larsky (larsky)
            Votes: 4
            Watchers: 8