I can confirm that when running a (Scripted or Declarative) Pipeline job whose ThrottleJobProperty has throttleOption set to category and whose corresponding ThrottleCategory has a positive value of maxConcurrentPerNode, ThrottleCategory#maxConcurrentPerNode is not respected (as demonstrated in ThrottleJobPropertyPipelineTest#onePerNode). Note that ThrottleCategory#maxConcurrentPerNode is respected for Freestyle jobs (as demonstrated in ThrottleJobPropertyFreestyleTest#onePerNode) and for (Scripted) Pipeline jobs that use ThrottleStep rather than ThrottleJobProperty (as demonstrated in ThrottleStepTest#onePerNode).
There seem to be two things at play preventing this:
1. The method that retrieves the job's throttle settings only works on instances of hudson.model.Job, but for Pipelines (at least Scripted Pipelines) the queued task is actually an ExecutorStepExecution$PlaceholderTask. It is easy to add access to the throttle property by checking for this type and getting the property from task.getOwnerTask().
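A minimal sketch of that unwrapping, using stand-in classes rather than the real Jenkins types (in the plugin this would test for ExecutorStepExecution.PlaceholderTask and call its getOwnerTask(), then read the ThrottleJobProperty off the owning hudson.model.Job):

```java
import java.util.Optional;

// Stand-in types: in the plugin these are hudson.model.Job,
// ThrottleJobProperty, and ExecutorStepExecution.PlaceholderTask.
interface Task {}

class ThrottleJobProperty {
    final int maxConcurrentPerNode;
    ThrottleJobProperty(int maxConcurrentPerNode) { this.maxConcurrentPerNode = maxConcurrentPerNode; }
}

class Job implements Task {
    private final ThrottleJobProperty property;
    Job(ThrottleJobProperty property) { this.property = property; }
    ThrottleJobProperty getProperty() { return property; }
}

// Pipeline executions enqueue a placeholder task that wraps the real job.
class PlaceholderTask implements Task {
    private final Task ownerTask;
    PlaceholderTask(Task ownerTask) { this.ownerTask = ownerTask; }
    Task getOwnerTask() { return ownerTask; }
}

public class ThrottleLookupSketch {
    // Unwrap a PlaceholderTask to its owner before looking up the property,
    // so Pipeline jobs are handled in addition to plain Jobs.
    static Optional<ThrottleJobProperty> findThrottleProperty(Task task) {
        if (task instanceof PlaceholderTask) {
            task = ((PlaceholderTask) task).getOwnerTask();
        }
        if (task instanceof Job) {
            return Optional.ofNullable(((Job) task).getProperty());
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Job job = new Job(new ThrottleJobProperty(1));
        System.out.println(findThrottleProperty(job).isPresent());                      // true
        System.out.println(findThrottleProperty(new PlaceholderTask(job)).isPresent()); // true
    }
}
```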
2. The logic that counts whether a running task is equal to a queued task needs to be expanded to cover Pipelines. The task passed to ThrottleQueueTaskDispatcher#buildsOnExecutor is a PlaceholderTask when checking the job's own properties, so equality has to be checked against the PlaceholderTask; but when checking based on category, the task passed in is the actual WorkflowJob, and equality has to be checked against the job itself.
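I can't reproduce the exact expressions here, but the shape of the comparison I ended up with is roughly the following, again with hypothetical stand-in classes (the real code compares hudson.model.Queue.Task instances inside ThrottleQueueTaskDispatcher#buildsOnExecutor, so the helper names are mine, not the plugin's):

```java
// Stand-in for hudson.model.Queue.Task.
interface QTask {}

// Stand-in for org.jenkinsci.plugins.workflow.job.WorkflowJob.
class WorkflowJob implements QTask {
    private final String name;
    WorkflowJob(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof WorkflowJob && ((WorkflowJob) o).name.equals(name);
    }
    @Override public int hashCode() { return name.hashCode(); }
}

// Stand-in for ExecutorStepExecution.PlaceholderTask.
class PlaceholderTask implements QTask {
    private final QTask ownerTask;
    PlaceholderTask(QTask ownerTask) { this.ownerTask = ownerTask; }
    QTask getOwnerTask() { return ownerTask; }
}

public class TaskEqualitySketch {
    // Unwrap a PlaceholderTask to the job that owns it before comparing, so the
    // check works whether the caller passed the placeholder or the job itself.
    static QTask unwrap(QTask t) {
        return (t instanceof PlaceholderTask) ? ((PlaceholderTask) t).getOwnerTask() : t;
    }

    static boolean sameUnderlyingTask(QTask running, QTask queued) {
        return unwrap(running).equals(unwrap(queued));
    }

    public static void main(String[] args) {
        WorkflowJob job = new WorkflowJob("deploy");
        System.out.println(sameUnderlyingTask(new PlaceholderTask(job), job));                  // true
        System.out.println(sameUnderlyingTask(new PlaceholderTask(job), new WorkflowJob("x"))); // false
    }
}
```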
But then Pipelines get counted twice, because they are counted both on a flyweight executor and on regular executors, even before the job starts. I don't know what flyweight executors are or, more importantly, why they are being counted; they don't appear to come into play for freestyle jobs. On my local fork where I've been experimenting, I'm tempted to just skip counting flyweights for these two cases, and then the per-node limits seem to work as you'd expect them to. Obviously the flyweight counts are there for a reason, though. The equality checks above also came from trial and error, so they may not be comparing the right objects conceptually.
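For reference, the workaround on my fork amounts to skipping flyweight executors when counting busy executors on a node. A stand-in sketch of that idea (I believe the flyweight executors in Jenkins are the hudson.model.OneOffExecutor instances a Computer exposes separately from its regular executors, but that mapping is my assumption, not something the plugin documents):

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for hudson.model.Executor.
class Executor {
    private final boolean busy;
    Executor(boolean busy) { this.busy = busy; }
    boolean isBusy() { return busy; }
}

// Stand-in for hudson.model.OneOffExecutor: a flyweight executor that hosts
// the lightweight "shell" of a Pipeline run, not the actual node-block work.
class OneOffExecutor extends Executor {
    OneOffExecutor(boolean busy) { super(busy); }
}

public class FlyweightCountSketch {
    // Count only busy heavyweight executors; counting the flyweight as well
    // would tally the same Pipeline build twice against the per-node limit.
    static long countBusyHeavyweight(List<Executor> executors) {
        return executors.stream()
                .filter(e -> !(e instanceof OneOffExecutor)) // skip flyweights
                .filter(Executor::isBusy)
                .count();
    }

    public static void main(String[] args) {
        List<Executor> execs = Arrays.asList(
                new OneOffExecutor(true), // flyweight running the Pipeline shell
                new Executor(true),       // heavyweight running the node block
                new Executor(false));
        System.out.println(countBusyHeavyweight(execs)); // 1
    }
}
```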
I could open a PR that does all this, but since I only use Scripted Pipelines, it's very likely that what I'm doing is not type-safe and will break for other job types or other uses of this plugin.