
s3Upload with includePathPattern does not upload files

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Environment: Jenkins 2.73.1, pipeline-aws-plugin 1.15

      Thanks for releasing the 1.15 version with the includePathPattern option to s3Upload()!

      Unfortunately, it doesn't work for me - no files are uploaded to S3.

      See the following pipeline:

      node {
        sh """
          mkdir -p test test2
          echo foo > test/bar.txt
          echo foo > test2/baz.txt
        """
        s3Upload(bucket: bucket, path: 'test-pattern/', includePathPattern: '*/*.txt')
        s3Upload(bucket: bucket, path: 'test-filename/test/bar.txt', file: 'test/bar.txt')
      }

      Only the test-filename folder is created in the bucket, no test-pattern folder. The output is as follows:

      Started by user anonymous
      [Pipeline] node
      Running on ESC (sir-4cs867nq) in /home/ubuntu/workspace/Test
      [Pipeline] {
      [Pipeline] sh
      [Test] Running shell script
      + mkdir -p test test2
      + echo foo
      + echo foo
      [Pipeline] s3Upload
      Uploading */*.txt to s3://$bucket/test-pattern/ 
      Upload complete
      [Pipeline] s3Upload
      Uploading file:/home/ubuntu/workspace/Test/test/bar.txt to s3://$bucket/test-filename/test/bar.txt 
      Finished: Uploading to $bucket/test-filename/test/bar.txt
      Upload complete
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      Finished: SUCCESS

      I hope that I'm not too stupid for pattern matching (the same happens with includePathPattern: '**/*').

          [JENKINS-47046] s3Upload with includePathPattern does not upload files

          Jacob Sohn added a comment -

          Yes, the s3Upload portion runs on build slaves.

          When used with the file parameter, however, s3Upload on a slave works as expected.
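
          (For reference, a minimal sketch of the two invocations being compared on an agent; the agent label and bucket name are placeholders:)

          node('some-agent-label') {
            writeFile file: 'test/bar.txt', text: 'foo'
            // works on an agent: explicit file parameter
            s3Upload(bucket: 'my-bucket', path: 'test-filename/test/bar.txt', file: 'test/bar.txt')
            // reports "Upload complete" on an agent, but nothing actually gets uploaded
            s3Upload(bucket: 'my-bucket', path: 'test-pattern/', includePathPattern: '*/*.txt')
          }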


          Thorsten Hoeger added a comment -

          Thanks for the info. This makes debugging a lot easier. So I can narrow it down to the master/slave code.

          Steffen Gebert added a comment -

          Yes, I can confirm that it does work on the master for me.

          Jacob Sohn added a comment -

          This seems to be related to https://issues.jenkins-ci.org/browse/JENKINS-44000, where the AWS identity is inherited from the master rather than from the running instance.

          Looking at the S3 plugin https://wiki.jenkins.io/display/JENKINS/S3+Plugin, a callable is used to stream from the slave to the master as a workaround for uploading to S3, since the plugin is executed only on the master.

          I've also tried wrapping the step with withAWS, using string values produced by the slave nodes as credential parameters, but this doesn't seem to work, as it has to apply to the entire scope of the pipeline.

          Any workaround or solution to this would be wonderful, as the only viable alternative is to use a shell block with awscli on the slave nodes (making this or any AWS plugin moot) or to run everything on the master with every security permission currently assigned only to the slave nodes.
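
          (For illustration, a minimal sketch of that awscli fallback, assuming the slave's own credentials, e.g. its instance profile, grant the needed S3 permissions; the label, bucket and paths are placeholders:)

          node('my-slave') {
            // runs entirely on the slave and uses its own AWS credentials,
            // bypassing the plugin's master-side execution
            sh "aws s3 cp build/ s3://my-bucket/my-project/ --recursive --exclude '*' --include '*.jar'"
          }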


          Thorsten Hoeger added a comment -

          Sorry, still no progress here. I am setting up a slave setup in the next few days to replicate this issue.

          Jonas Van Nieuwenberg added a comment -

          Hi, I'm facing the same issue in a master/slave setup.

          Something peculiar I noticed while goofing around with the settings: if I set the path parameter to be an actual file name (instead of a directory), then the matched files do seem to be uploaded, but obviously they all have the provided file name.
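
          (A minimal illustration of that observation, reusing the reproduction case from the description; 'my-bucket' is a placeholder:)

          node {
            // with a file name as path, the matched files are uploaded,
            // but, as described above, they all get this single name
            s3Upload(bucket: 'my-bucket', path: 'test-pattern/bar.txt', includePathPattern: '*/*.txt')
          }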


          Sorin Sbarnea added a comment - edited

          I can confirm this bug: the `path` parameter is fully ignored when includePathPattern is used, making it impossible to upload packages to specific target locations inside the bucket.

          I am still trying to find a way to work around this bug, but no solution yet. Anyone?


          Oliver Schoenborn added a comment - edited

          The only thing that works for me is:

          s3Upload( 
            bucket: 'BUCKET', 
            path: "PATH_TO_FOLDER", // no trailing slash 
            file: "FOLDER", 
            workingDir: "PARENT_OF_FOLDER" 
          )

           
          With includePathPattern, the only value that worked partially was "**/*.yaml"; other patterns like "*.yaml" and "*/*" did not work. I say partially because it uploaded only one file even though there were several (there is a bug related to this).

          Using findFiles is also an option, as documented in https://github.com/jenkinsci/pipeline-aws-plugin/issues/83.
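
          (A minimal sketch of that findFiles approach, assuming the Pipeline Utility Steps plugin is installed; the bucket name and glob are placeholders:)

          node {
            // enumerate the files on the agent, then upload each one by explicit name,
            // preserving the relative path under the target prefix
            def files = findFiles(glob: '*/*.txt')
            files.each { f ->
              s3Upload(bucket: 'my-bucket', path: "test-pattern/${f.path}", file: f.path)
            }
          }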


          Rob White added a comment -

          Sad to see no update on this.

          I can confirm it's an issue for me and yes, only on my slaves.


          Demetrio Lopez added a comment -

          My workaround is uploading file by file:

           

          pipeline {
            agent any
            environment {
              // pipeline-wide settings (declarative pipelines cannot declare 'def' variables inside a stage)
              AWS_ACCOUNT_ID = '01010101010101010'
              REGION = 'eu-north-1'
              ROLE = 'MyIamRole'
              EXTERNAL_ID = 'MyExternalId'
              BUCKET = 'my-artifacts'
              PROJECT = 'my-project'
            }
            stages {
              stage('Build app and upload artifacts to S3') {
                agent {
                  label 'my-slave-with-maven'
                }
                steps {
                  // build source code
                  dir('./SourceCode') {
                    sh 'mvn -B clean package'
                  }
                  script {
                    // upload the jars to S3 one by one instead of relying on includePathPattern
                    def jar_files = findFiles(glob: "**/SourceCode/${PROJECT}/target/*.jar")
                    jar_files.each {
                      echo "JAR found: ${it}"
                      withAWS(externalId: "${EXTERNAL_ID}", region: "${REGION}", role: "${ROLE}", roleAccount: "${AWS_ACCOUNT_ID}") {
                        s3Upload(file: "${it}", bucket: "${BUCKET}", path: "${PROJECT}/", acl: 'BucketOwnerFullControl')
                      }
                    }
                  }
                }
              }
            }
          }
          


            Assignee: Thorsten Hoeger (hoegertn)
            Reporter: Steffen Gebert (stephenking)
            Votes: 9
            Watchers: 16