I have exactly the same issue - I want to lock a single stage using a lock based on the Git repo address that triggered the build. This lets me update a 2nd repo containing package files (one file per development repo), safe in the knowledge that no other Jenkins process is updating the same file, while other files in the same repo can still be updated concurrently. It also means any resulting merge is a fast-forward when two separate files in the package repo are updated at the same time - a significant parallelization win across different development repos (unrelated projects don't queue to update the package manager).
From what lglussen is saying - it sounds like it might be possible to include a parameter in the trigger from the development repo from Bitbucket/Github which could then be used to lock the stage - I'll have a look at this (thanks for the tip!).
Even with this workaround, it duplicates data that is already passed to Jenkins automatically as part of the trigger - which isn't ideal - and updating all the triggers to include it is a maintenance burden. So a proper fix for the original problem would be great!
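For anyone trying the same workaround, here's a minimal sketch of what I have in mind (assumes the Lockable Resources plugin; `REPO_URL` is a hypothetical string parameter that the Bitbucket/GitHub webhook trigger would have to supply):

```groovy
pipeline {
    agent any
    parameters {
        // Hypothetical parameter the webhook fills in with the triggering repo's address.
        string(name: 'REPO_URL', defaultValue: '', description: 'Repo that triggered the build')
    }
    stages {
        stage('Update package file') {
            options {
                // Parameters (unlike environment variables) are resolved by the
                // time stage options are evaluated, so this lock name works.
                lock(resource: "pkg-${params.REPO_URL}")
            }
            steps {
                // Only builds from the same triggering repo queue here;
                // builds from other repos run this stage in parallel.
                sh './update-one-package-file.sh'
            }
        }
    }
}
```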
I use the options-style lock to wrap a few stages in a large pipeline and I have the same problem. Even though the lock is deep within the pipeline, the options seem to be pre-computed before the run, when the environment variables are not yet injected. For my use case I would really like to use the `GIT_URL` provided by the multibranch pipeline project as the source of my lock. I don't mind multiple branches building at once, but there is a section they can't be in at the same time if they are on the same repo.
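One workaround I'm considering, since environment variables aren't available when options are pre-computed: take the lock as a `lock` step inside the stage instead, where `GIT_URL` is already resolved. A sketch (assumes the Lockable Resources plugin and a multibranch checkout that sets `GIT_URL`):

```groovy
pipeline {
    agent any
    stages {
        stage('Critical section') {
            steps {
                // env.GIT_URL is resolved at step execution time, so all branches
                // of one repo share this lock while other repos proceed freely.
                lock(resource: "critical-${env.GIT_URL}") {
                    sh './do-the-serialized-work.sh'
                }
            }
        }
    }
}
```

The trade-off is that the executor is already allocated and the stage has started before the lock is acquired, unlike an options-level lock which queues the stage itself.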
If you have a parameterized build, I believe parameters are available in the options blocks - but environment variables are not.
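A fragment illustrating the distinction (parameter name `TARGET_REPO` is just an example, not from the original issue):

```groovy
options {
    // Works: build parameters are bound before options are evaluated.
    lock(resource: "deploy-${params.TARGET_REPO}")

    // Does NOT work as intended: environment variables such as GIT_URL
    // are not yet injected when options are pre-computed, so the
    // interpolated resource name would not contain the repo address.
    // lock(resource: "deploy-${env.GIT_URL}")
}
```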