Type: Improvement
Resolution: Unresolved
Priority: Minor
Artifact-Manager-S3-plugin Version:
Looking at https://github.com/jenkinsci/artifact-manager-s3-plugin, the uploads appear to happen serially, one file at a time.
For 10,000 files of roughly 5 KB each, the upload takes around 20 minutes; a single 100 MB zip file takes only a few seconds.
I've tried to fix it myself, but I'm not a Java developer, have no Jenkins plugin development experience, and don't really know what I'm doing.
The original line:
for (Map.Entry<String, URL> entry : artifactUrls.entrySet()) {
    client.uploadFile(new File(f, entry.getKey()), contentTypes.get(entry.getKey()), entry.getValue(), listener);
}
My change:
artifactUrls.entrySet().parallelStream().forEach(entry ->
    client.uploadFile(new File(f, entry.getKey()), contentTypes.get(entry.getKey()), entry.getValue(), listener));
(If uploadFile throws a checked exception, the call would also need to be wrapped in a try/catch inside the lambda.)
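parallelStream() is one option, but it runs on the shared common ForkJoinPool, so the degree of parallelism is hard to control. As a rough, untested sketch only, here is what an alternative using an ExecutorService with a bounded thread pool might look like; the variable names (artifactUrls, contentTypes, f, client, listener) are taken from the snippet above, and it assumes client.uploadFile can safely be called from multiple threads:

// Sketch only, not tested against the plugin: upload artifacts concurrently
// with a bounded thread pool instead of a serial loop.
// Requires java.util.concurrent.{ExecutorService, Executors, Future} and java.util.{ArrayList, List}.
ExecutorService pool = Executors.newFixedThreadPool(8); // example concurrency limit
try {
    List<Future<?>> uploads = new ArrayList<>();
    for (Map.Entry<String, URL> entry : artifactUrls.entrySet()) {
        uploads.add(pool.submit(() -> {
            client.uploadFile(new File(f, entry.getKey()), contentTypes.get(entry.getKey()), entry.getValue(), listener);
            return null; // Callable form so a checked IOException can propagate
        }));
    }
    for (Future<?> upload : uploads) {
        upload.get(); // surfaces any upload failure; the enclosing method would need to
                      // handle or declare InterruptedException and ExecutionException
    }
} finally {
    pool.shutdown();
}

The bounded pool keeps the number of simultaneous S3 connections predictable, and waiting on each Future preserves the current behaviour of failing the step when any individual upload fails.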
As an example, teams I work with often set the Archive Artifacts include list to `.`. They archive hundreds, and quite often thousands, of small files consisting of compiled assets: JavaScript, CSS, images, etc.
An often-cited workaround is to pre-zip the files into a single archive before using the Archive Artifacts feature to store the artifact.
However, in practice we find the extra pre-zipping step to be a point of friction/contention with our user base.
Since AWS S3 allows a large number of simultaneous uploads, it seems reasonable to take advantage of that in the plugin rather than burden our users with the added friction.