Matt Wilson: Sorry for not responding earlier; looking at your post from December last year...
Seeing "Socket closed" at the time when Jenkins is closing the agent down (and hence the socket that talked to that agent) is unremarkable (as far as the docker-plugin is concerned).
I agree that it's ugly, but (I think) that's all it is - ugly - and not evidence of a fault.
(but, if I'm wrong, please tell me why in detail...)
That's normal for Jenkins.
When a job is pending, the Jenkins UI tries to guess why it isn't running yet, but that's all it is - a guess.
So, in this scenario, Jenkins has an agent "DK_COSCOMMON7_D15-0000p4hq2f6yg" that's in the process of closing down (and being deleted), so Jenkins says "waiting for an executor on DK_COSCOMMON7_D15-0000p4hq2f6yg", and then, once that agent has been deleted, it says that it's waiting for an agent with the label(s) the job is asking for.
TL;DR: The Jenkins UI misleads users when there are dynamic (cloud) agents being supplied "on demand".
Sadly, that's normal; docker-java is overly verbose when it comes to logging exceptions, and here it is logging (as an exception for user attention) a perfectly normal result that's fully handled by the docker-plugin code. The plugin does two container-removals asynchronously: one will get there first and remove the container, and the second will be told it's already been removed (and the code handles that), but not before docker-java logs that answer as an exception requiring end-user attention (which is wrong - code should not log exceptions it throws).
TL;DR: Only pay attention to com.github.dockerjava.api.exception stuff when looking for further information surrounding a Jenkins exception that happened at the same time.
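To illustrate the pattern being described (this is a hypothetical Groovy sketch against the docker-java client API, not the plugin's actual code; removeContainerQuietly is a made-up name):

    // Two cleanups can race to remove the same container; the loser gets a
    // NotFoundException from docker-java, which the caller can treat as
    // "already done". docker-java still logs that exception before throwing it,
    // and that log entry is the scary-but-harmless output users see.
    import com.github.dockerjava.api.DockerClient
    import com.github.dockerjava.api.exception.NotFoundException

    void removeContainerQuietly(DockerClient client, String containerId) {
        try {
            client.removeContainerCmd(containerId).withForce(true).exec()
        } catch (NotFoundException alreadyRemoved) {
            // The container is already gone, which is the outcome we wanted,
            // so swallow the exception and carry on.
        }
    }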
Re: "sit in the queue for ever"
Hmm... "that shouldn't happen".
OK, now this sounds like a real issue - what should happen is that a job waiting for an executor causes a new container to be created in Docker and a new agent to be added to Jenkins, and then the job runs on that agent.
That's what should happen.
However, I am aware that there's a (long-standing) bug whereby the docker-plugin can get confused w.r.t. which containers are "in progress" and which aren't ... so can you use the Jenkins "Script Console" (you'll need Jenkins admin rights) and do:
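(One caveat: the exact field name can vary with the plugin version; the snippet below assumes the "in progress" bookkeeping is the static CONTAINERS_IN_PROGRESS map on com.nirima.jenkins.plugins.docker.DockerCloud, so check that against the docker-plugin version you're running.)

    // Groovy - paste into "Manage Jenkins" -> "Script Console".
    // Prints the docker-plugin's record of containers it believes
    // are still in the process of being created.
    import com.nirima.jenkins.plugins.docker.DockerCloud
    println(DockerCloud.CONTAINERS_IN_PROGRESS)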
The map printed will show which containers the plugin believes are still being created - if it's not empty, and it looks like those containers are never coming online, you can clear it to get the docker-plugin to try again:
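(Same caveat as above about the field name; clearing the map only resets the plugin's own bookkeeping.)

    // Groovy - paste into the Script Console.
    // Forget all "in progress" records so the docker-plugin will
    // provision fresh containers the next time demand appears.
    import com.nirima.jenkins.plugins.docker.DockerCloud
    DockerCloud.CONTAINERS_IN_PROGRESS.clear()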
Note: Restarting the Jenkins server will (effectively) do this too, so anything that can be "fixed by restarting" may be because of this. AFAICT it's mostly caused by changes to the docker-plugin's configuration happening at the same time that other things are going on ... and I've never discovered exactly what/why (or it would've been fixed by now!)
So, if clearing that map un-blocks things then that's another symptom of that bug ... which may then shed additional light on WTF is going on there and thus help resolve it "for good".
Re: attach vs ssh
The container connection method should be irrelevant to this.
Re: node folder
I don't use folders myself; I have zero experience with them, and I'm not aware of any unit tests (in the docker-plugin) that verify everything continues to work when they're in use.
It is possible that this issue only exists when folders are being used and disappears when they're not in use; if you (or anyone else) can prove/disprove this then that would be useful information.