It is currently not possible to prevent a second copy of the swarm client from being launched accidentally (through automation errors, timing issues, etc.). I support a large swarm that handles cross-platform testing, so I run automation on multiple Linux and Windows OS variants with different init systems. Surprisingly, systemd units are by far the worst offenders.
Anyway, I feel that the -pidFile switch is the natural place to implement this kind of locking. Currently I can launch two clients and both will run; no combination of flags prevents this. For example:
-name something -master XXX -pidFile /tmp/jenkinspid -disableClientsUniqueId -deleteExistingClients
This causes the two copies to aggressively knock each other out every few seconds, forever, killing running jobs in the process. Not a good option.
-name something -master XXX -pidFile /tmp/jenkinspid
This causes the first copy to write a pidfile. The second copy will delete that pidfile(!!) and then fail because the name (with or without -disableClientsUniqueId) is already in use, so it will loop forever waiting to reconnect. A terrible option.
In my opinion, if -pidFile is specified, a second copy should refuse to start (exit with a nonzero status) when the pidfile already exists and its pid matches a running process.
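To illustrate the proposed behavior, here is a minimal, hypothetical sketch of the check (this is not the swarm client's actual code; class and method names are invented for illustration). It uses Java 9+ ProcessHandle to test liveness of the pid recorded in the file, exits nonzero if that process is still running, and otherwise overwrites the stale pidfile with its own pid:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PidFileGuard {
    // Hypothetical sketch: refuse to start when the pidfile names a live process.
    public static void acquireOrExit(Path pidFile) throws IOException {
        if (Files.exists(pidFile)) {
            long oldPid = Long.parseLong(Files.readString(pidFile).trim());
            // ProcessHandle.of() yields a handle only if such a pid exists (Java 9+).
            if (ProcessHandle.of(oldPid).map(ProcessHandle::isAlive).orElse(false)) {
                System.err.println("Another swarm client (pid " + oldPid
                        + ") is already running; refusing to start.");
                System.exit(1); // nonzero exit, as proposed above
            }
            // Stale pidfile (previous process is gone): fall through and overwrite.
        }
        // Record our own pid so the next launcher can perform the same check.
        Files.writeString(pidFile, Long.toString(ProcessHandle.current().pid()));
    }

    public static void main(String[] args) throws IOException {
        Path pidFile = Paths.get(args.length > 0 ? args[0] : "/tmp/jenkinspid");
        acquireOrExit(pidFile);
        System.out.println("acquired " + pidFile);
    }
}
```

Note that a check-then-write sequence like this has a small race window between reading and writing; a real implementation would probably want an atomic file lock (e.g. FileChannel.tryLock) on top of the pid comparison.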