Bug
Resolution: Unresolved
Priority: Major
Jenkins: 2.89.4
Lockable Resources Plugin: 2.1
Jenkins Pipeline Milestone Step Plugin: 1.3.1
Hi,
I would like to know if there is something wrong with the pipeline script below.
Step 1: Trigger a pipeline build with the {{error "error"}} line commented out.
Step 2: Immediately trigger another pipeline build with the {{error "error"}} line un-commented.
Step 3: You will see that build 1 is superseded and aborted as soon as build 2 errors out.
I would expect build 1 not to be superseded and to finish, since build 2 *failed*, not *passed*.
I have removed the milestone steps from the script and repeated Steps 1 & 2. This time, build 1 is neither superseded nor aborted.
I wonder if the usage of milestones is flawed here? If so, how can I change the script to achieve the expected behavior?
Sample Pipeline Script:
{code:java}
node {
    timestamps {
        milestone()
        lock(resource: "my_bld_lock", inversePrecedence: true) {
            milestone()
            stage("Bld") {
                sleep 10
            }
        }
        milestone()
        lock(resource: "my_pkg_lock", inversePrecedence: true) {
            milestone()
            stage("Pkg") {
                sleep 15
            }
        }
        milestone()
        parallel([
            "Testing": {
                lock(resource: "my_test_lock", inversePrecedence: true) {
                    stage("Test") {
                        sleep 6
                    }
                }
            },
            "Second Level Testing": {
                lock(resource: "my_second_test_lock", inversePrecedence: true) {
                    stage("Second Level Test") {
                        sleep 4
                    }
                }
            },
            "Create & Deploy": {
                lock(resource: "my_create_lock", inversePrecedence: true) {
                    stage("Create") {
                        sleep 5
                        // error "error"
                        sleep 30
                    }
                }
                lock(resource: "my_deploy_lock", inversePrecedence: true) {
                    stage("Deploy") {
                        sleep 70
                    }
                }
            },
        ])
        milestone()
    }
}
{code}
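(For reference: the milestone step docs state that when a build passes a milestone, any older build that passed the previous milestone but not that one is aborted. One way to make the intended gating explicit is to pin each milestone to a fixed {{ordinal}} and {{label}} rather than relying on auto-assigned ordinals. This is a sketch only, not verified against this Jenkins/plugin version; the lock names are the ones from the script above:)
{code:java}
// Sketch (untested): explicit ordinals/labels keep milestone ordering
// stable and visible even if stages are later added or removed.
node {
    timestamps {
        milestone(ordinal: 1, label: 'start')
        lock(resource: "my_bld_lock", inversePrecedence: true) {
            milestone(ordinal: 2, label: 'bld')
            stage("Bld") { sleep 10 }
        }
        milestone(ordinal: 3, label: 'post-bld')
        // ... remaining lock/stage/milestone steps as in the script above,
        // each milestone given the next ordinal in sequence ...
    }
}
{code}
(Whether a build that *fails* is still treated as having passed later milestones on completion — which would explain build 1 being superseded here — is exactly the behavior in question, so explicit ordinals clarify intent but may not change the outcome.)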
[JENKINS-50744] unexpected behavior of milestone steps
Component/s | New: pipeline [ 21692 ] |
Component/s | Original: pipeline [ 21692 ] |
Labels | Original: lockable parallel pipeline | New: lockable milestone parallel pipeline |
Component/s | New: pipeline-milestone-step-plugin [ 21448 ] |
Component/s | Original: lockable-resources-plugin [ 18222 ] |
Labels | Original: lockable milestone parallel pipeline | New: milestone parallel pipeline |
Description |
Original:
Hi, We have noticed that the locks were not handled / cleaned up properly during parallel phase executions. Sample Pipeline Script:
{code:java}
node {
    milestone()
    lock(resource: "my_bld_lock", inversePrecedence: true) {
        milestone()
        stage("Bld") {
            sleep 5
        }
    }
    milestone()
    parallel([
        "Testing": {
            lock(resource: "my_test_lock", inversePrecedence: true) {
                stage("Test") {
                    // error "error"
                    sleep 10
                }
            }
        },
        "Second Level Testing": {
            lock(resource: "my_second_test_lock", inversePrecedence: true) {
                stage("Second Level Test") {
                    sleep 10
                }
            }
        },
        "Deployment": {
            lock(resource: "my_deploy_lock", inversePrecedence: true) {
                stage("Deploy") {
                    sleep 90
                }
            }
        },
        failFast: true
    ])
}
{code}
Scenario: Trigger a pipeline build with the above pipeline script first, then trigger a second one with the line _error "error"_ un-commented. You will see that the first pipeline build is killed (ABORTED or NOT_BUILT) as soon as the second pipeline build errors out. The first build's console shows that it is superseded by the second build, which should not be the case: the second build was waiting for the lock "my_deploy_lock" and should not have killed the build that held the lock. Note: failFast is required in this case; the problem is that the errored build is killing another build. |
New:
Hi, I would like to know if there is something wrong with the pipeline script below. Step 1: Trigger a pipeline build with the {{error "error"}} line commented out. Step 2: Immediately trigger another pipeline build with {{error "error"}} un-commented. Step 3: You will see that build 1 is superseded and aborted as soon as build 2 errors out. I would expect build 1 not to be superseded and to finish, since build 2 {color:#FF0000}*failed*{color}, not *passed*. I have removed the milestone steps from the script and repeated Steps 1 & 2; this time build 1 is neither superseded nor aborted. I wonder if the usage of milestones is flawed here? If so, how can I change the script to achieve the expected behavior? Sample Pipeline Script:
{code:java}
node {
    timestamps {
        milestone()
        lock(resource: "my_bld_lock", inversePrecedence: true) {
            milestone()
            stage("Bld") {
                sleep 10
            }
        }
        milestone()
        lock(resource: "my_pkg_lock", inversePrecedence: true) {
            milestone()
            stage("Pkg") {
                sleep 15
            }
        }
        milestone()
        parallel([
            "Testing": {
                lock(resource: "my_test_lock", inversePrecedence: true) {
                    stage("Test") {
                        sleep 6
                    }
                }
            },
            "Second Level Testing": {
                lock(resource: "my_second_test_lock", inversePrecedence: true) {
                    stage("Second Level Test") {
                        sleep 4
                    }
                }
            },
            "Create & Deploy": {
                lock(resource: "my_create_lock", inversePrecedence: true) {
                    stage("Create") {
                        sleep 5
                        // error "error"
                        sleep 30
                    }
                }
                lock(resource: "my_deploy_lock", inversePrecedence: true) {
                    stage("Deploy") {
                        sleep 70
                    }
                }
            },
        ])
        milestone()
    }
}
{code} |
Assignee | New: Antonio Muñiz [ amuniz ] |