When restarting a multibranch pipeline job from a stage, I noticed that the archived artifacts are not copied to the new pipeline run if the job name contains a forward slash. The artifacts are stored in S3 via the Artifact Manager on S3 plugin. In the S3 object keys the / in the job name is encoded as %2F, and the artifact itself is archived successfully; I can see it in our AWS account.
The same pipeline (attached to this issue) works when restarted from a stage if the job name does not contain a forward slash: the archived artifacts are copied to the new pipeline run and the pipeline completes successfully.
We'd like to be able to restart an older pipeline so that we can redeploy older builds in the event of a rollback.
I found this resolved issue, which seems related to what I've experienced, but it appears to be specific to manually downloading artifacts via the S3 HTTP URL.
That issue was resolved in the 1.2 release; we're running the latest version but are still seeing the branch-name problem.
To test this I created a pipeline, attached to this issue, that does the following:
- Run a successful build that archives an artifact.
- Start a new build by restarting from the first stage of that successful build.
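For reference, the test pipeline follows this general shape. This is a minimal sketch, not the exact attached Jenkinsfile; the stage names and artifact file name are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Produce and archive an artifact; with the Artifact Manager
                // on S3 plugin configured, archiveArtifacts uploads to S3.
                sh 'echo "build output" > artifact.txt'
                archiveArtifacts artifacts: 'artifact.txt'
            }
        }
        stage('Deploy') {
            steps {
                // When a build is restarted from the 'Build' stage, Jenkins
                // copies the archived artifacts from the earlier run into the
                // new run; that copy is what fails for job names with a '/'.
                echo 'deploying artifact.txt'
            }
        }
    }
}
```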
Restarting an older pipeline this way worked when I ran it on jobs based on the branches named below.
The same steps failed with a "The specified key does not exist" error when run on jobs based on the following branch names.
I'm guessing the artifact lookup in the S3 bucket derives the object key from the branch name through some hashing or encoding step, and the forward slash causes a mismatch there.
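As an illustration of that guess only (the branch name, key layout, and lookup behavior here are my assumptions, not the plugin's actual code), a mismatch between an encoded stored key and a raw lookup key would produce exactly this error:

```groovy
// Hypothetical sketch; not the plugin's real key-building logic.
def branch = 'feature/foo'  // assumed branch name containing a '/'

// Archiving appears to store the key with the slash encoded as %2F:
def storedKey = "jobs/${URLEncoder.encode(branch, 'UTF-8')}/artifact.txt"

// If the restart-time lookup built the key from the raw name instead:
def lookupKey = "jobs/${branch}/artifact.txt"

// The keys differ, so S3 would answer
// "The specified key does not exist."
assert storedKey != lookupKey
```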