Type: Bug
Resolution: Unresolved
Priority: Blocker
Labels: None
Jenkins version: 2.361.1
Amazon EC2 Plugin version: 2.0.4
Hi team,
I have configured the Amazon EC2 plugin to launch a slave node when both executors of the main Jenkins instance are occupied building jobs.
There is one deployment job that has the slave node configured as its agent, i.e. this job can only run on the slave node. Whenever I trigger the job, a new node gets launched through the Amazon EC2 plugin and the job starts executing on that slave node.
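For reference, the pipeline's agent declaration looks roughly like the sketch below (a minimal example; 'ec2-slave' is a placeholder for whatever label the EC2 agent template actually uses):

pipeline {
    // 'ec2-slave' is a placeholder label: it should match the label
    // configured in the Amazon EC2 plugin's agent template, so the
    // job is forced to run on the EC2 slave node
    agent { label 'ec2-slave' }
    stages {
        stage('Deploy') {
            steps {
                // the workspace that disappears mid-build is the one
                // allocated for these steps on the EC2 node
                echo "Building in ${env.WORKSPACE} on ${env.NODE_NAME}"
            }
        }
    }
}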
But after some time Jenkins on the slave node restarts and deletes the workspace of the current build, and the job fails with the error "Could not find the workspace".
This only happens for workspaces that do not already exist on the main Jenkins instance. For example, I have created a multibranch pipeline and I trigger a build for a new feature branch whose workspace does not exist on main Jenkins yet; that build fails with the "Could not find the workspace" error. If I remove the agent from the pipeline and build it on main Jenkins, the workspace for the new branch gets created there, and after that the job builds fine on the slave node even though the workspace is deleted (as I can see in the Jenkins logs). So again and again I have to trigger the build on main Jenkins first for the build to run smoothly on the slave node.