For the folks who replied after me, or anyone else who comes across this issue: I've been very happy just using the AWS CLI tool for uploading my artifacts as a post-build step. I have greater faith in Amazon maintaining their own command line tool than in some third-party Jenkins plugin. However, be aware that this does introduce a dependency on the command line tool being installed on your slaves, as well as on the inject-passwords-as-environment-variables plugin (or however else you'd like to provision the command line tool with credentials).
Here's a gist of what I'm doing to upload the entire contents of a `./artifacts` directory in my job's workspace while preserving the hierarchy of any files within that directory:
https://gist.github.com/richard-bt/f35723140043fada2a3416d4f5b76b81
(Of course, modify the scripts to upload artifact files according to your needs - but I prefer to keep the logic for how things should look in scripts that arrange the contents of my `./artifacts` directory prior to the post-build upload step.)
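In rough terms, a minimal version of such a step might look like the sketch below (the bucket name and prefix layout are placeholders, not necessarily what's in the gist; credentials are assumed to arrive as environment variables):

```sh
#!/bin/bash
# Post-build "Execute shell" step: upload ./artifacts to S3, preserving hierarchy.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are assumed to be injected as
# environment variables by whichever credentials mechanism you use.
set -euo pipefail

BUCKET="my-artifact-bucket"           # placeholder bucket name
PREFIX="${JOB_NAME}/${BUILD_NUMBER}"  # keep uploads separated per job/build

# --recursive mirrors the directory structure under ./artifacts into S3, so
# ./artifacts/reports/unit.xml lands at s3://$BUCKET/$PREFIX/reports/unit.xml
aws s3 cp ./artifacts "s3://${BUCKET}/${PREFIX}/" --recursive
```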
After using Jenkins for a while I've adopted a preference for minimizing the number of Jenkins plugins I depend on for things I could otherwise accomplish with a shell script - especially when I have to support jobs running on various operating systems.
IMO, if we want to make this less 'magical', we should allow the user to specify a prefix path. For example, say I have a file at `foo/bar/baz/qux.zip`.
Then I'd specify:
prefixPath = foo/bar
search = baz/*
so that when I upload to `myBucket`, the file ends up at `s3://myBucket/baz/qux.zip`, whereas the current behavior of specifying sourceFile = `foo/bar/baz/*` uploads it to the root of the bucket.
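For comparison, this is roughly the behavior you get today with the AWS CLI by treating the prefix path as the working directory (just a sketch; `myBucket` is the example bucket from above):

```sh
# cd into the proposed prefix path so uploaded keys keep everything after it,
# then restrict the upload to files matching the `baz/*` search pattern.
cd foo/bar
aws s3 cp . "s3://myBucket/" --recursive --exclude "*" --include "baz/*"
# foo/bar/baz/qux.zip  ->  s3://myBucket/baz/qux.zip
```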