• Type: New Feature
    • Resolution: Unresolved
    • Priority: Minor
    • Component: s3-plugin
    • Labels: None

      The S3 plugin currently uploads a file with the key set equal to the base filename that is returned by the matcher.

      I would like to upload files with the key set to the path returned by the matcher. For example, with a matcher of 'el/**' and several files underneath, I would like to see the following keys in the S3 bucket:

      el/noarch/file1.txt
      el/some/path/file2.c
      el/some/other/path/file3.java
      el/afile
      el/readme

      and so on.
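The requested behavior can be sketched as follows. This is a hypothetical illustration (the `relativeKey` helper is not part of the plugin): rather than using only the base filename, the S3 key becomes the file's path relative to the workspace, with forward slashes.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class KeyDemo {
    // Hypothetical helper: build an S3 key from a file's path relative to the
    // workspace. Forward slashes are forced so OS-specific separators do not
    // leak into the key.
    static String relativeKey(Path workspace, Path file) {
        return workspace.relativize(file).toString().replace('\\', '/');
    }

    public static void main(String[] args) {
        Path ws = Paths.get("/jenkins/workspace/job");
        // Files matched by the 'el/**' pattern keep their path under the workspace:
        System.out.println(relativeKey(ws, ws.resolve("el/noarch/file1.txt")));
        System.out.println(relativeKey(ws, ws.resolve("el/some/path/file2.c")));
        System.out.println(relativeKey(ws, ws.resolve("el/afile")));
    }
}
```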

          [JENKINS-16025] Allow full path when uploading a file

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Craig Kapp
          Path:
          src/main/java/hudson/plugins/s3/S3BucketPublisher.java
          src/main/java/hudson/plugins/s3/S3Profile.java
          http://jenkins-ci.org/commit/s3-plugin/24a3b88a7b9acb0532deabe686944e53747c49a9
          Log:
          JENKINS-16025 - Fixed issue where all files go to the root of the bucket. Now we apply the relative path inside the bucket.

          SCM/JIRA link daemon added a comment -

          Code changed in jenkins
          User: Craig Kapp
          Path:
          src/main/java/hudson/plugins/s3/S3BucketPublisher.java
          src/main/java/hudson/plugins/s3/S3Profile.java
          http://jenkins-ci.org/commit/s3-plugin/148aa7d18d0ad99bb26423168a95010b45ce1a78
          Log:
          JENKINS-16025 (https://issues.jenkins-ci.org/browse/JENKINS-16025?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab)
          Fixed issue where all files ended up at the root of the bucket. Now it applies the relative path to the bucket.

          David Beer added a comment -

          Github pull request merged and tested.


          robmoore added a comment -

          We're seeing some issues related to this. With version 0.5, our source pattern (**/my-web-service.war, for example) uploaded the file my-web-service.war to the path specified. Now the full path to the file (my-web-service/target/) is being included. This is a fundamental change with significant impact that isn't well documented. Ideally there would be a way to choose whether the full path is desired.


          Joshua K added a comment - edited

          Yeah, I think this patch is buggy. Not only should there be a "don't flatten" option and the default restored to the original behavior, but as-is, it doesn't appear to work. Here's my matcher:

          src/failed-gui-tests/*,src/build/logs/*.log,src/**/vncserver-*.out
          

          Here's what the UI says I uploaded:

          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=test.all.javascript.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=test.all.python.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=test.report.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=test.system.run.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=test.unit.run.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=unpack-and-run.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=update.all.log region=US_EAST_1, upload from slave=false managed=false
          Publish artifacts to S3 Bucket bucket=my-bucket/my-project-3159_2014-06-12_01-55-55, file=vncserver-:1.out region=US_EAST_1, upload from slave=false managed=false
          

          Here's what I end up with in the S3 bucket:

          2014-06-11 22:44:11      86740 -and-run.log
          2014-06-11 22:44:12       2946 .all.log
          2014-06-11 22:44:10        102 .start.log
          2014-06-11 22:44:11         38 .stop.log
          2014-06-11 22:44:10      35601 e.tests.log
          2014-06-11 22:44:10       2726 e.windows.all.log
          2014-06-11 22:44:11       1029 eport.log
          2014-06-11 22:44:10        279 junit.report.dirs.log
          2014-06-11 22:44:10         20 lease.files.log
          2014-06-11 22:44:11      59899 ll.javascript.log
          2014-06-11 22:44:11      12879 ll.python.log
          2014-06-11 22:44:11     208326 nit.run.log
          2014-06-11 22:44:10         37 stop.log
          2014-06-11 22:44:09      26294 upload.log
          2014-06-11 22:44:11       4448 ystem.run.log
          

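The truncated key names above (e.g. `ll.javascript.log` for `test.all.javascript.log`) are consistent with the relative path being cut using a prefix length computed from the wrong base directory. This is only a plausible sketch of that failure mode, not the plugin's actual code:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        String file = "src/build/logs/test.all.javascript.log";

        // Correct: strip exactly the directory portion of the matched file's path.
        String goodBase = "src/build/logs/";
        System.out.println(file.substring(goodBase.length()));

        // Bug sketch: stripping a prefix length taken from a *different*, longer
        // base string (hypothetical miscomputed base) chops characters off the
        // filename itself, producing keys like "ll.javascript.log".
        String wrongBase = "src/build/logs/test.a";
        System.out.println(file.substring(wrongBase.length()));
    }
}
```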

          Giancarlo Martinez added a comment -

          Not only that, it also appears to upload files incorrectly from a Windows slave (regardless of whether slave-managed upload is on). It takes the directory structure and literally writes the path to S3 with backslashes, which obviously yields an incorrect result.

          301 2014-06-12 17:39:54 Win32/build-slave-with-parameters/287/win\builds\props.txt

          So the new version has effectively broken uploads on Windows slaves.
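The backslash problem described above comes from using the OS path separator verbatim in the key, while S3 treats '/' as its delimiter. A minimal sketch of the normalization that would fix it (hypothetical helper, not the plugin's actual code):

```java
public class WindowsKeyDemo {
    // Hypothetical fix: always normalize separators before using a relative
    // path as an S3 key, so Windows slaves produce the same keys as Unix ones.
    static String toS3Key(String relativePath) {
        return relativePath.replace('\\', '/');
    }

    public static void main(String[] args) {
        // A Windows slave produces backslash-separated relative paths:
        System.out.println(toS3Key("win\\builds\\props.txt"));
    }
}
```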

          Ray H added a comment -

          IMO, if we want to make this less 'magical', we should allow the user to specify a prefix path. For example, if I have a file at foo/bar/baz/qux.zip, I'd specify:

          prefixPath = foo/bar
          search = baz/*

          so that when I upload to `myBucket`, the file is uploaded to `s3://myBucket/baz/qux.zip`, whereas the current behavior of specifying sourceFile = `foo/bar/baz/*` would upload it to the root directory.

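Ray H's proposal could be sketched like this. `prefixPath` and `keyFor` are hypothetical names from the comment and this sketch, not actual plugin options: the key is the matched file's path with the user-configured prefix stripped.

```java
public class PrefixDemo {
    // Hypothetical: strip a user-configured prefix from the matched path
    // to form the S3 key; paths outside the prefix are left unchanged.
    static String keyFor(String prefixPath, String matchedPath) {
        String prefix = prefixPath.endsWith("/") ? prefixPath : prefixPath + "/";
        return matchedPath.startsWith(prefix)
                ? matchedPath.substring(prefix.length())
                : matchedPath;
    }

    public static void main(String[] args) {
        // prefixPath = foo/bar, search = baz/* matches foo/bar/baz/qux.zip:
        System.out.println(keyFor("foo/bar", "foo/bar/baz/qux.zip"));
    }
}
```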

          robmoore added a comment -

          Just noticed this change in version 7: https://github.com/jenkinsci/s3-plugin/commit/0a098fcaa9d42e53f04264b9da1a79770ecdad9e

          Curious if this addresses part of this ticket by offering the ability to 'flatten' the directory?

          From the docs:

          When enabled, Jenkins will ignore the directory structure of the artifacts in
          the source project and copy all matching artifacts directly into the specified
          bucket. By default the artifacts are copied in the same directory structure as
          the source project.

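The two behaviors the quoted docs describe can be sketched as follows (hypothetical helper, not the plugin's implementation): a flattened key keeps only the file name, while the default keeps the source-relative path.

```java
import java.nio.file.Paths;

public class FlattenDemo {
    // Hypothetical illustration of the 'flatten directories' option:
    // flattened keys use only the file name; otherwise the relative path is kept.
    static String keyFor(String relativePath, boolean flatten) {
        return flatten
                ? Paths.get(relativePath).getFileName().toString()
                : relativePath;
    }

    public static void main(String[] args) {
        System.out.println(keyFor("my-web-service/target/my-web-service.war", true));
        System.out.println(keyFor("my-web-service/target/my-web-service.war", false));
    }
}
```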

          Justin Santa Barbara added a comment -

          Is anyone planning on fixing this? I am seeing the same behaviour as Joshua K (12/June/14). If not, I could have a look at it...

          Joshua K added a comment -

          Hey Justin, I found that if I use the 'flatten directories' option in 0.7, this piece of code is skipped and it just uses the basename of the file, which is what I wanted anyway. You can add multiple upload configuration blocks if you need to put stuff in S3 sub-buckets.


          Joshua K added a comment -

          ...Not to say that this shouldn't be fixed anyway, but I find this an effective workaround.

          Richard Brooks added a comment -

          It appears that the reported issue on Windows, with backslashes being passed through literally, is resolved by this outstanding pull request:
          https://github.com/jenkinsci/s3-plugin/pull/55

          Jeff Grimmett added a comment -

          I'm seeing the same issue as Joshua K with regards to truncated file names. Attempting the "flatten directories" option to see if that will see us through, and hoping our build planners don't try to get too clever with file patterns O.o


          Joshua Spence added a comment -

          +1 on this issue.


          Kevin R. added a comment -

          for the love of god, bump.


          Richard Brooks added a comment -

          For the folks who replied after me, or anyone else who comes across this issue: I've been very happy just using the AWS CLI tool to upload my artifacts as a post-build step. I have greater faith in Amazon maintaining their own command-line tool than in a third-party Jenkins plugin. Be aware, however, that this introduces a dependency on the CLI being installed on your slaves, as well as on the inject-passwords-as-environment-variables plugin (or however else you'd like to provision the CLI with credentials).

          Here's a gist of what I'm doing to upload the entire contents of an `./artifacts` directory in my job's workspace while preserving the hierarchy of any files within that directory:
          https://gist.github.com/richard-bt/f35723140043fada2a3416d4f5b76b81

          (Of course, modify the scripts to collect artifact files according to your needs - I prefer to put the logic for how things should look in scripts that arrange the contents of my `./artifacts` directory prior to the post-build artifact step.)

          After using Jenkins for a while, I've adopted a preference for minimizing the number of Jenkins plugins I depend on for things I could otherwise accomplish with a shell script - especially when supporting jobs that run on various operating systems.

            dmbeer David Beer
            wizard113 wizard113
            Votes: 8
            Watchers: 17