Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment:
Jenkins 2.293
S3 Publisher plugin 0.11.7
CloudBees Docker Custom Build Environment Plugin 1.7.3
A Freestyle Jenkins job running in a Docker container on a slave node at AWS, on Ubuntu 18.04.
Hi,
I have a Jenkins job that generates numerous files. Of those, around 8 GB worth are uploaded to S3 using the S3 Publisher plugin.
It often works, and many other jobs with very similar configurations succeed. Lately, however, the following error has been appearing sporadically:
com.amazonaws.AmazonClientException: Unable to complete transfer: Java heap space
Caused by: java.lang.OutOfMemoryError: Java heap space
These errors appear during the S3 upload phase and not at other times, so they seem to be coming from the S3 Publisher plugin.
I have attempted to increase the Jenkins heap size:
JAVA_ARGS="-Djava.awt.headless=true -Xmx6000m -Xms6000m"
It doesn't always solve the problem.
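I also wonder (I have not verified this in the plugin source) whether the upload runs in the slave's JVM rather than the controller's. If so, the JAVA_ARGS above would not apply, and the slave's own heap would need raising instead when the agent is launched, along the lines of (the URL is just a placeholder):

java -Xmx6g -jar agent.jar -jnlpUrl https://jenkins.example.com/computer/my-slave/slave-agent.jnlp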
Does uploading 8 GB of files to S3 really require a heap size of 8 GB?
Usually on Linux, you can rsync, scp, or cp far more data than fits in physical memory, because the data is streamed rather than held in RAM; for example, you can copy 1 TB of files on a machine with only 2 GB of memory.
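To illustrate what I would expect, here is a minimal sketch using the AWS SDK's TransferManager, which, judging from the AmazonClientException above, the plugin seems to use; the bucket name and paths are made up. When given a File, the SDK reads each multipart chunk from disk as it is sent, so heap usage should stay roughly bounded by part size times upload threads, not by total file size:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class S3StreamingUploadSketch {
    public static void main(String[] args) throws InterruptedException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .build();
        // Passing a File (rather than a byte[] or an InputStream slurped
        // into memory) lets the SDK stream the upload part by part.
        Upload upload = tm.upload("my-bucket", "artifacts/output.tar.gz",
                new File("/workspace/output.tar.gz"));
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}

If the plugin instead buffers each file (or all of them) into memory before uploading, that would explain why heap usage tracks the total upload size.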
For the S3 Publisher plugin, what is the correlation between the files being uploaded and the required memory?
Could the S3 Publisher plugin be optimized to not require as much memory?