I've got a multi-branch declarative pipeline build job that uses a Dockerfile to set up a container for the build. Multiple branches have identical Dockerfiles, but building an image can produce different results, as the result also depends on other files in the repository.
We recently had a job fail because two branches were building in parallel; after much head scratching we realised the same tag had been applied to both Docker images, so one branch ended up being built inside a container based on the other branch's image. Digging into the code, I can see that the image tag is currently based on a hash of the Dockerfile. I think this is flawed, as identical Dockerfiles do not necessarily produce the same output.
I can think of two other options:
- Use a tag based on the (full) job name
- Use a randomly generated tag
I believe Docker is already pretty clever about reusing cached intermediate layers, so I don't expect either change to have a significant impact on subsequent image build times, even if a random tag is used.
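As a sketch of the first option: Jenkins exposes the full job name in the `JOB_NAME` environment variable, and Docker tags only allow `[a-zA-Z0-9_.-]`, so the job name needs sanitising before it can be used as a tag. Something like this (the `build-env` image name is just a placeholder):

```shell
#!/bin/sh
# JOB_NAME is set by Jenkins for every build; the default here is only a
# stand-in so the script runs outside Jenkins too.
JOB_NAME="${JOB_NAME:-my-project/feature/foo}"

# Docker tags may only contain [a-zA-Z0-9_.-], so lowercase the job name
# and map every other character (e.g. the '/' separators) to '-'.
TAG=$(printf '%s' "$JOB_NAME" | tr 'A-Z' 'a-z' | tr -c 'a-z0-9_.-' '-')
echo "$TAG"

# The build step would then tag the image per job rather than per Dockerfile:
# docker build -t "build-env:${TAG}" .
```

Since the tag is stable per branch, each rebuild of the same branch overwrites its own image and still gets full layer-cache reuse, while parallel branches can no longer clobber each other.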
|Field|Original Value|New Value|
|---|---|---|
|Status|Open [ 1 ]|In Progress [ 3 ]|
|Status|In Progress [ 3 ]|In Review [ 10005 ]|
|Remote Link| |This issue links to "PR #247 (Web Link)" [ 20122 ]|
|Resolution| |Fixed [ 1 ]|
|Status|In Review [ 10005 ]|Resolved [ 5 ]|
|Status|Resolved [ 5 ]|Closed [ 6 ]|