We have a pipeline with a function that is called from multiple places, and the code inside that function needs to run on the node "master". So we have:
def someFunction() {
    // maybe do some stuff
    node("master") {
        // definitely do some stuff that needs master
    }
}
What we have recently found out is that when this function is called from code already running on master, it locks not just the executor it is already using on master but an additional executor as well.
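For example (simplified), the problem shows up whenever the function is called from code that is already inside a node("master") block, something like:

node("master") {
    // ... already doing work on master on this executor ...
    someFunction()   // the nested node("master") inside requests a second executor
}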
To further investigate this, we found out that even doing something as simple as...
node("master") { node("master") { sleep(600) } }
...in a test job locks two executors on master if you have more than one executor, or hangs indefinitely with "Waiting for next available executor on 'master'" if you only have one, even though the only thing occupying that single executor is this very block of code.
I'd suspect this isn't just the case for master, but for any code that requests a label it may already be running on. A possible workaround for code that needs to execute in this fashion is to write all these blocks like this:
if ("${NODE_NAME}" != "master") { node("master") { // do my code } } else { // do my code }
Or else wrap this into a closure/helper of its own and call something like nodeIfNotAlreadyOn("master") instead...
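A rough sketch of what such a helper could look like (nodeIfNotAlreadyOn is a made-up name, not an existing step, and it compares env.NODE_NAME against the label, which only works when the node name and the label coincide, as they do for master):

// Hypothetical helper: only allocate a new executor if we aren't already on the requested node.
def nodeIfNotAlreadyOn(String label, Closure body) {
    if (env.NODE_NAME == label) {
        // already running on this node; run the body on the current executor
        body()
    } else {
        node(label) {
            body()
        }
    }
}

// usage:
nodeIfNotAlreadyOn("master") {
    // do my code
}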
But neither option is very nice, and it's not the behaviour I was initially expecting.
Furthermore, I honestly can't think of a use case where you would ever want a nested block with the exact same label to grab a second executor at the same time (especially when the first executor with that label is only going to sit locked, waiting for the code on the second executor to finish), which is why this feels more like a bug than anything else to me.