Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Component/s: docker-workflow-plugin
- Labels: None
- Environment: Debian Jessie x64; Docker Pipeline 1.11; Jenkins ver. 2.46.3; Docker version 17.05.0-ce, build 89658be
- Released As: docker-workflow 1.19
Description
When using named stages in a multi-stage build, as in the example below, the Jenkins pipeline fails with the following message right after the build has finished.
<SNIP>
Successfully built b59ee5bc6b07
Successfully tagged bytesheep/odr-dabmux:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Cannot retrieve .Id from 'docker inspect alpine:3.6 AS builder'
	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.inspectRequiredField(DockerClient.java:193)
	at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:119)
	at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:75)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
	at hudson.security.ACL.impersonate(ACL.java:260)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
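For reference, the failure is triggered by a docker.build call in the pipeline; a minimal scripted-pipeline sketch (stage name illustrative, image tag taken from the log above):

node {
    stage('Build image') {
        // docker.build runs `docker build` and then the dockerFingerprintFrom step,
        // which is where this failure occurs for multi-stage Dockerfiles
        def image = docker.build('bytesheep/odr-dabmux:latest')
    }
}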
Dockerfile
#
# Build environment
#
FROM alpine:3.6 AS builder
<SNIP>
#
# Create final container
#
FROM alpine:3.6
<SNIP>
# Copy artifacts from builder
COPY --from=builder /usr/local .
There is a workaround for this issue: remove the stage names and reference stages by index instead. Example:
FROM alpine:3.6
<SNIP>
FROM alpine:3.6
COPY --from=0 /usr/local .
Looking at the related source at #100, it seems the code determines the image name by looking for FROM and taking everything until the end of the line, which would include 'AS buildname'.
At a glance, it also looks like the code takes the first stage for fingerprinting instead of the final stage (which is the resulting image).
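To illustrate the two fixes this implies (strip any 'AS name' suffix, and take the last FROM rather than the first), here is a rough Groovy sketch of such parsing; it is illustrative only, not the plugin's actual code:

def baseImageOf(String dockerfile) {
    String image = null
    dockerfile.readLines().each { line ->
        // match "FROM <image>" with an optional "AS <name>" stage alias
        def m = line =~ /(?i)^FROM\s+(\S+)(?:\s+AS\s+\S+)?\s*$/
        if (m) {
            image = m[0][1]   // keep only the image reference, dropping 'AS <name>'
        }
    }
    return image   // the last FROM is the final stage's base image
}

assert baseImageOf('FROM alpine:3.6 AS builder\nFROM alpine:3.6') == 'alpine:3.6'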
Issue Links
- is duplicated by JENKINS-44789: docker 17.05 multistage Dockerfile breaks dockerFingerprintFrom (Resolved)
Activity
Another way: you can add an extra unnamed FROM as the first line:
# Build environment
#
FROM alpine:3.6
FROM alpine:3.6 AS builder
<SNIP>
# Create final container
#
FROM alpine:3.6
<SNIP>
# Copy artifacts from builder
COPY --from=builder /usr/local .
Just use sh 'docker build .' rather than the Image.build DSL.
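I.e. something along these lines (image name illustrative):

node {
    // plain docker build: no Dockerfile parsing, no dockerFingerprintFrom step
    sh 'docker build -t myorg/myimage:latest .'
}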
Docker multi-stage builds throw away intermediate containers as they go, so the concern above cannot be fixed in Jenkins code. You can see this in the build output with lines like: `Removing intermediate container 66e6311b3971`
(unless you meant to say image instead of container)
A quick fix to get the final "FROM" would be as simple as changing the "break" to a "continue" in the parsing loop.
If, however, you want to get at the intermediate images, that's going to take a lot more effort.
Eric Smalling be careful: you only need the intermediate image, not the intermediate container (which, as you said, is effectively deleted).
Those intermediate images are used for the "caching" of Docker builds, and contain the filesystem (so all the files).
The containers are just instantiations of these immutable images. They are deleted by default because they would otherwise duplicate things (not strictly true in terms of layers, but let's see it that way).
You can access these intermediate images with the flag "-a" added to "docker image ls" or to "docker images" (if you have an older Docker version):
docker image ls -a
Damien Duportal - I think we are saying the same thing. (I was quoting your original comment where you mentioned the intermediate containers.)
What I am saying is that a simple change to loop until the last FROM statement would fix the parsing error and would make docker.build work like it does for non-multi-stage builds.
The problem of obtaining the image IDs for the intermediate images is a bigger one to solve, and should probably be a separate feature enhancement, as opposed to the bug occurring here: grabbing the first FROM and pulling in the " AS ..." part of the line.
Opened tentative PR for this: https://github.com/jenkinsci/docker-workflow-plugin/pull/111
As stated there, I am open to enhancements to the JUnit tests.
Fixed and released in v1.13
Marcus van Dam Please test with v1.13 of the plugin and re-open with comments if still an issue for you
Eric Smalling, I have the same issue. I just updated to 1.13 of the Docker Pipeline plugin and am still getting the same error.
Reviewing release build - will update shortly
In case it helps:
Docker version 17.06.2-ce, build cec0b72
Jenkins 2.76
More details. Here is a stub of my Dockerfile. If I understand the attempted fix in 1.13, we are now looking at the last FROM. Since I am aliasing the "Release" stage, that may be why this is still blowing up.
# Base image
FROM 12345/node-base:latest AS base
WORKDIR /app

# Dependencies
FROM base AS dependencies
*STUFF*

# Test
FROM dependencies AS test
*STUFF*

# Build
FROM dependencies AS build
*STUFF*

# Release
FROM base AS release
COPY --from=dependencies /app/prod_node_modules ./node_modules
COPY --from=build /app/dist ./dist
*STUFF*
Yes - I'm sure that's the issue.
I'm curious what is the purpose of using that label on the final stage?
Elegance
Let me try removing it and seeing if that fixes the issue.
Eric Smalling, that did not fix the issue:
Step 17/23 : FROM base
---> 94a0dbc48319
Step 18/23 : COPY --from=dependencies /app/prod_node_modules ./node_modules
---> Using cache
---> ae78693390f4
Step 19/23 : COPY --from=build /app/dist ./dist
---> Using cache
---> cc6134f1e1a6
STUFF
Step 23/23 : WORKDIR /app/dist
---> Using cache
---> dbbe53379363
Successfully built dbbe53379363
Successfully tagged a3-configuration:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Cannot retrieve .Id from 'docker inspect base'
Eric Smalling, as a temporary workaround, I bet I can change the last FROM (instead of "FROM base") to:
FROM 12345/node-base:latest AS base
WORKDIR /app
But at some point someone (maybe even me) will be doing something a bit more complex in the base image and won't want to have to duplicate all the steps in the final build.
Eric Smalling, that fixed my problem. I still recommend we handle the case where the last FROM in a Dockerfile is built from an alias.
Ah - I missed the fact that you were coming from a prior stage in your last FROM... not something I've heard of people doing, since they usually want to come from alpine or something. I can see why you're doing it, though - I'll try to get a fix in this week and will post here when I have an hpi to test.
Same issue here with multi-stage builds. Any fix on the road? Or any way to disable traceability, or a workaround?
Yasmany Cubela Medina, my workaround was to change the last FROM to not use a named prior stage. Instead I re-used the original FROM.
I changed this:
# Base image
FROM 12345/node-base:latest AS base
WORKDIR /app

# Dependencies
FROM base AS dependencies
*STUFF*

# Test
FROM dependencies AS test
*STUFF*

# Build
FROM dependencies AS build
*STUFF*

# Release
FROM base AS release
COPY --from=dependencies /app/prod_node_modules ./node_modules
COPY --from=build /app/dist ./dist
*STUFF*
TO (notice the last FROM is different):
# Base image
FROM 12345/node-base:latest AS base
WORKDIR /app

# Dependencies
FROM base AS dependencies
*STUFF*

# Test
FROM dependencies AS test
*STUFF*

# Build
FROM dependencies AS build
*STUFF*

# Release
FROM 12345/node-base:latest AS base
WORKDIR /app
COPY --from=dependencies /app/prod_node_modules ./node_modules
COPY --from=build /app/dist ./dist
*STUFF*
Sorry, I've not had time to look at this further yet. Until I (or someone else) does, I recommend doing as Jesse Glick says and just run docker build via an "sh".
I have been given the go-ahead to attempt to fix this using the --iidfile option with a few hours of my paid work time to fix our use of this feature. Jesse Glick in an informal email exchange agreed that this sounded like a reasonable mode of repair, for what that's worth.
....
So, having determined that the problem is not acquiring the ID of the just-created image but rather the SOURCE image IDs, I decided that the proper way to fix this for multi-stage builds was to make the primitive, naive parser smarter (this is the parser that causes the 'space in my FROM line' exception, thanks to an unnamed constant (5)), so that it 1. knows about build-args and can substitute them in, and 2. understands whitespace the same way Docker does.
Unfortunately, I ain't got time for that. What I DID notice in analyzing the code is that this exceedingly naive parser will ignore any FROM line that isn't pegged to the first column - so I can put 'FROM scratch\n FROM ${whatever}/thing:${whateverelse}' and it won't spot the line with the space in the first column. Fortunately, Docker 17.09 doesn't care about that space - allowing me to fool the plugin into thinking I sourced from scratch when I didn't.
It's a workaround, but it works. I have no further action here at this time, but I hope to get back to this sometime and make it work properly.
WORKAROUND - MAKE IT LOOK MORE LIKE THIS:
FROM scratch
 FROM buildimage:${POSSIBLY_ARGS}
RUN build stuff
 FROM runimage:${whatever}
COPY --from=1 /built/code /deploy/location
CMD startup
I'm not an experienced Java developer, but this has been annoying me for way too long. I've opened a PR here with more fixes for this issue, but could use some guidance: https://github.com/jenkinsci/docker-workflow-plugin/pull/149
Andreas Lutro et al.
I feel that is rather a workaround. We faced this issue a couple of weeks ago and I tried to poke around. So far the only solution I see is to change the workflow and add an additional parameter to the DSL, like `iidfile="path/to/file"`, so that on the other side the Docker Workflow plugin can check whether it is set and use the ID from the file instead of relying on the naive parser.
I could create PRs for both plugins, but would prefer to discuss this solution first.
Theoretically `--iidfile` could be added as part of `buildArgs`, but then the Workflow plugin would have to parse the build args; with a separate DSL parameter it is just a simple isEmpty check and logic like this to get the ID:
FilePath dockeridfile = workspace.child(step.iidfile);
String id;
try (InputStream isid = dockeridfile.read()) {
    try (BufferedReader r = new BufferedReader(new InputStreamReader(isid, "ISO-8859-1"))) {
        id = r.readLine();
    }
}
Just to be clear I mean pipeline scenario like this:
stage('Build') {
    agent {
        dockerfile {
            filename 'Dockerfile'
            dir 'deployment'
            additionalBuildArgs '--target base'
        }
    }
    steps {
        sh "echo TEST123"
    }
}
where the Dockerfile is multi-stage and contains aliases like "FROM base AS prod".
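For context, --iidfile is an existing docker build flag that makes Docker itself write the built image's ID to a file, so no Dockerfile parsing is needed at all. A minimal scripted-pipeline sketch of the idea (names and paths illustrative; the proposed iidfile DSL parameter does not exist in the released plugin):

node {
    // Docker reports the ID of the final stage's image, regardless of stage aliases
    sh 'docker build --iidfile .imageid -t myorg/myimage:latest .'
    def imageId = readFile('.imageid').trim()   // e.g. "sha256:..."
    echo "Built image ${imageId}"
}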
If you want to submit a "better" PR then don't let me stop you, but I'd rather have a working plugin with workarounds than a non-working one until someone (whoever that is) submits a "proper" solution.
That being said, is this plugin even being maintained? Am I wasting my time commenting here and making a PR?
is this plugin even being maintained?
Not that I know of. IMO you should not use the docker DSL, nor the withDockerContainer step (including Declarative Pipeline’s agent {docker …} and agent {dockerfile …}), and at most use the withDockerRegistry and withDockerServer steps.
I've modified FromFingerprintStep, removed the Dockerfile parser, and used docker inspect to walk up the image history to the previous properly tagged image.
This solves the multistage issue for me.
https://github.com/jenkinsci/docker-workflow-plugin/pull/155
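Roughly, the idea is something like the following sketch (not the PR's actual code; it assumes the classic builder, where docker inspect exposes a .Parent image ID; image names illustrative):

node {
    // start from the freshly built image and follow .Parent upwards
    def id = sh(script: "docker inspect -f '{{.Parent}}' myorg/myimage:latest",
                returnStdout: true).trim()
    while (id) {
        // stop at the first ancestor that carries a repo tag
        def tags = sh(script: "docker inspect -f '{{join .RepoTags \" \"}}' ${id}",
                      returnStdout: true).trim()
        if (tags) {
            echo "previous tagged image: ${tags}"
            break
        }
        id = sh(script: "docker inspect -f '{{.Parent}}' ${id}", returnStdout: true).trim()
    }
}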
We are also having trouble because of this, since we have to rewrite all of our Dockerfiles to be compatible with Jenkins.
Any news on this is appreciated.
I just set up Jenkins and ran into this issue as well; I would love to see it resolved so I can utilize Jenkins. Otherwise this is a blocker for me.
Steven Weathers simply run sh 'docker build…' and do not use this feature.
Jesse Glick, that's one thing I tried, but it makes things like deploying to a registry more of a hassle. Overall I've since abandoned trying to use Jenkins and have gone with another CI that worked flawlessly from the start, since it was Docker-oriented.
makes things like deploying to registry more of a hassle
For what it’s worth, my recommendation is
withDockerRegistry(url: 'https://docker.corp/', credentialsId: 'docker-creds') {
    sh 'sh build-and-push'
}
with the script being something like
docker build -t docker.corp/x/y:$TAG .
docker push docker.corp/x/y:$TAG
Also seeing this same issue, but interestingly enough, it works in one job but not another doing almost exactly the same thing.
Working code in question:
docker.build('mydockerimage', "--file ${DOCKERFILE} --pull --build-arg BUILD_NUMBER=${BUILD_NUMBER} .")
Code that doesn't work:
docker.build('mydockerimage', "--file ${myProperties.DOCKERFILE} --pull --build-arg BUILD_NUMBER=${params.BUILD_TO_DEPLOY} .")
Where "myProperties" is read from a properties file using "readProperties" from stage utils plugin. The docker image seems to be built fine in both cases, but in the latter, we see the error:
Successfully built 204ce2321dab
Successfully tagged <redacted>
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Cannot retrieve .Id from 'docker inspect <redacted>'
We do use multi-stage docker builds with "AS name" stage aliases in the Dockerfile being built.
There is a PR solving this here: https://github.com/jenkinsci/docker-workflow-plugin/pull/162
Could a maintainer take a look at it?
A fix for this issue was just released in Docker Pipeline plugin version 1.19. From the release notes:
Deprecate the dockerFingerprintFrom and dockerFingerprintRun steps and stop calling them during docker.build and image.run. Fixes various issues with Dockerfile parsing and parsing arguments to docker build.
One clue that might be useful here: https://github.com/moby/moby/pull/33185/files
There is a "--target" option to docker build that can help. But if you target only the build stage, as in your example, then the child images (the other FROM stages) will be ignored.
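For reference, targeting a stage from a pipeline looks like this (stage name taken from the earlier example, tag illustrative):

node {
    // build only up to the "builder" stage; later stages are not produced
    sh 'docker build --target builder -t myorg/builder:latest .'
}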
The user need that the Docker Workflow plugin addresses (expressing instructions to build in a Docker-provided build environment) is the same need that multi-stage builds address.
The plugin does not currently seem useful with multi-stage builds: moving to a simple sh 'docker build -t image ./' should be enough there, except perhaps for the need for fingerprinting.
=> The big concern is: how can Jenkins access the intermediate container, e.g. for publishing unit tests or reports?
For today, I'm trying to parse the docker build output and then use docker cp to get the files into the workspace, which is portable in terms of UID, and less painful.
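A sketch of that approach under the classic builder, whose output prints '---> <id>' lines identifying the image produced at each step (paths and names illustrative):

node {
    // capture the build log so the intermediate image IDs can be recovered
    def buildLog = sh(script: 'docker build .', returnStdout: true)
    def ids = (buildLog =~ /---> ([0-9a-f]{12})/).collect { it[1] }
    if (ids) {
        // ids.last() is the final image; earlier entries are the intermediates.
        // Create a throwaway container from the wanted image and copy files out.
        sh "docker create --name extract ${ids.last()}"
        sh 'docker cp extract:/usr/local/reports ./reports'
        sh 'docker rm extract'
    }
}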