Type: Bug
Resolution: Duplicate
Priority: Minor
Labels: None
Environment:
* minikube
* jenkins/jenkins:2.75 image
* kubernetes-pipeline plugin 0.12
* docker-workflow plugin 1.12
* docker-commons plugin 1.8
* durable-task-plugin 1.14
It seems to be impossible to use the Docker Pipeline plugin from inside a Kubernetes-provisioned node (which is in fact a Kubernetes pod).
Steps to reproduce:
- run minikube
- check out the kubernetes-plugin repo
- run Jenkins from the manifests in the aforementioned repo
- install the plugins listed in the "Environment" section
- create a "pipeline" job
- connect a git repo to the job
- create a Jenkinsfile (as in the example below)
- run a job build
Jenkinsfile:
podTemplate(label: 'slave', containers: [
    containerTemplate(
        name: 'docker',
        image: 'docker:dind',
        ttyEnabled: true,
        alwaysPullImage: true,
        privileged: true,
        command: 'dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay')
], volumes: [emptyDirVolume(memory: false, mountPath: '/var/lib/docker')]) {
    node('slave') {
        stage('Run a non-docker thing') {
            sh 'echo test'
            sh 'hostname -f'
            sh 'sleep 3'
        }
        stage('Run a docker thing') {
            container('docker') {
                stage 'Docker thing1'
                checkout scm
                // sh 'docker info'
                // sh 'docker build -t rmwpl/test:latest .'
                // sh 'docker images'
                app = docker.build("rmwpl/test:latest")
                stage 'docker exec'
                app.inside {
                    sh 'ls -alh'
                }
            }
        }
    }
}
The error message:
Removing intermediate container 8f6fa2e174e6
Successfully built 238a4104c8fa
Successfully tagged rmwpl/test:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
java.lang.IllegalArgumentException: Expecting 64-char full image ID, but got 
	at org.jenkinsci.plugins.docker.commons.fingerprint.DockerFingerprints.getFingerprintHash(DockerFingerprints.java:71)
	at org.jenkinsci.plugins.docker.commons.fingerprint.DockerFingerprints.forDockerInstance(DockerFingerprints.java:148)
	at org.jenkinsci.plugins.docker.commons.fingerprint.DockerFingerprints.forImage(DockerFingerprints.java:115)
	at org.jenkinsci.plugins.docker.commons.fingerprint.DockerFingerprints.forImage(DockerFingerprints.java:100)
	at org.jenkinsci.plugins.docker.commons.fingerprint.DockerFingerprints.addFromFacet(DockerFingerprints.java:260)
	at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:119)
	at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:75)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
	at hudson.security.ACL.impersonate(ACL.java:260)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
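For illustration, a simplified standalone sketch (my assumption of what the validation at DockerFingerprints.getFingerprintHash roughly does, not the plugin's actual code): the image ID captured from the launcher must be a 64-character hex string, so an empty string (stdout swallowed) trips exactly this exception.

```java
public class FingerprintCheck {
    // Hypothetical, simplified stand-in for docker-commons'
    // DockerFingerprints.getFingerprintHash: validate a full image ID.
    static String getFingerprintHash(String id) {
        // Tolerate an optional "sha256:" prefix (assumed, as newer Docker emits it).
        if (id.startsWith("sha256:")) {
            id = id.substring("sha256:".length());
        }
        if (!id.matches("[0-9a-f]{64}")) {
            throw new IllegalArgumentException(
                "Expecting 64-char full image ID, but got " + id);
        }
        return id;
    }

    public static void main(String[] args) {
        // An empty ID, as when the subprocess stdout never reaches the caller,
        // reproduces the message from the stack trace above.
        try {
            getFingerprintHash("");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```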
I did some debugging with the docker-commons and docker-workflow plugins (adding some custom log messages) and it looks like stdout is not getting through from hudson.Launcher to docker-commons (as can be seen in the stack trace above, there is nothing after "but got"). The same functionality (app = docker.build("whatever")) works when it is not run from inside Kubernetes (via kubernetes-plugin).
More debugging led me to durable-task-plugin, which seems to be responsible for swallowing the stdout and stderr streams.
Actually, I've even modified the code that launches the "docker inspect" command in the docker-workflow plugin to run "hostname" instead, and its stdout also does not get through to the calling process, so the problem does not seem to be related to dind in any way.
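To be clear about what "stdout not getting through" means: outside Jenkins, capturing a subprocess's stdout this way works fine for both "docker inspect" and "hostname". A minimal standalone sketch using plain ProcessBuilder (not Jenkins' hudson.Launcher, so only an approximation of the plugin's code path):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

public class CaptureStdout {
    // Run a command and return its trimmed stdout, mirroring what
    // docker-workflow expects back from `docker inspect -f {{.Id}}`
    // (or, in my substitution test, from `hostname`).
    static String run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).start();
        String out;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            out = r.lines().collect(Collectors.joining("\n")).trim();
        }
        p.waitFor();
        return out; // empty when the stream is swallowed upstream
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run("hostname"));
    }
}
```

Run locally this prints a non-empty hostname; inside the Kubernetes-provided node the equivalent capture comes back empty.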
Note that the commented-out code in my Jenkinsfile works fine, i.e. I'm able to build the image "by hand", but I'm aiming to use both plugins (kubernetes-plugin and docker-workflow-plugin, aka docker-pipeline-plugin) together.
Right before the stack trace, I was seeing the "SEVERE: onClose called but latch already finished. This indicates a bug in the kubernetes-plugin" message in the log, so I thought it might be related to https://issues.jenkins-ci.org/browse/JENKINS-45885 and fixed in https://github.com/jenkinsci/kubernetes-plugin/pull/182, so I tried a snapshot built from master, but to no effect – still getting the same stack trace.
(Previously filed as a comment to JENKINS-39664)
- duplicates: JENKINS-39664 Docker builds do not work with Kubernetes Pipeline plugin (Closed)
- is related to: JENKINS-39664 Docker builds do not work with Kubernetes Pipeline plugin (Closed)