JENKINS-70651

Allow retrieval of S3 paths for separate downloads

We use Kubernetes to run our agents. Per best practice, our pods have a lightweight container that runs the Jenkins agent and a larger container with our build-time dependencies (AWS CLI, etc.). This is very cost-efficient, as we scale our EKS cluster nodes on demand throughout the day in AWS.
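
For illustration, a minimal sketch of this pod layout with the kubernetes plugin's `podTemplate` step (the container name, image, and memory limit are made-up values):

```groovy
// Sketch of the two-container pod layout described above. The 'build' name,
// image, and memory limit are illustrative; the default jnlp container runs
// the Jenkins agent alongside it.
podTemplate(containers: [
    containerTemplate(
        name: 'build',                       // heavier container with build-time deps
        image: 'example/build-tools:latest', // assumed image with the aws cli installed
        command: 'sleep', args: 'infinity',
        resourceLimitMemory: '4Gi')
]) {
    node(POD_LABEL) {
        container('build') {
            sh 'aws --version'               // build-time tooling lives in this container
        }
    }
}
```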

One problem we encounter is that when we have large artifacts, the Jenkins agent container gets OOM-killed. The large file causes the disk cache to consume memory, and Kubernetes decides to kill the Java process due to memory pressure (Kubernetes is very sensitive to the memory pressure from the disk cache, and we cannot tune this part). The current workaround is to allocate enough RAM to the Jenkins agent container, but this is very wasteful, especially for longer-running stages.

A more efficient workaround would be for us to retrieve the S3 URLs of the artifacts and then have our larger build container download them, possibly only when needed.

Is it possible to do this at the Groovy level? For example, if we could run a `script { }` step to extract the S3 paths, we could save them to a text file on disk and let the AWS CLI, in the container with the larger amount of RAM, do the download for us.
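
As a rough sketch of what we have in mind, assuming the artifact-manager-s3 default key layout of `<prefix>/<jobFullName>/<buildNumber>/artifacts/<path>` (the bucket, prefix, and artifact names below are made up):

```groovy
// Hedged sketch: build candidate S3 URLs from the (assumed) artifact-manager-s3
// key layout, save them to disk, and let the aws cli in the larger container
// do the download. Bucket, prefix, and artifact names are illustrative.
script {
    def bucket    = 'my-artifact-bucket'                    // assumed bucket
    def prefix    = 'jenkins-artifacts'                     // assumed plugin prefix
    def artifacts = ['dist/app.tar.gz', 'dist/app.sha256']  // paths we want to fetch
    def urls = artifacts.collect { p ->
        "s3://${bucket}/${prefix}/${env.JOB_NAME}/${env.BUILD_NUMBER}/artifacts/${p}"
    }
    // Save the list to disk so the heavier container can consume it.
    writeFile file: 's3-paths.txt', text: urls.join('\n')
}
container('build') {  // the larger container that has the aws cli and more RAM
    sh '''
        mkdir -p downloads
        while read -r url; do
            aws s3 cp "$url" downloads/
        done < s3-paths.txt
    '''
}
```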

The simplest option for us would be ephemeral download URLs, so we would not even need to expose IAM permissions to perform the downloads, but it is fine if that is not possible.
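
For reference, the AWS CLI can already mint such ephemeral URLs with `aws s3 presign`; a rough sketch of how we would consume them (the signing step itself still needs credentials, which is the part we would like Jenkins to take over):

```groovy
// Sketch: presign each S3 path (needs credentials once, at signing time), then
// download with plain curl and no AWS credentials at all. File names and the
// 3600s expiry are illustrative.
container('build') {
    sh '''
        : > s3-urls.txt
        while read -r url; do
            aws s3 presign "$url" --expires-in 3600 >> s3-urls.txt
        done < s3-paths.txt
        mkdir -p downloads
        i=0
        while read -r href; do
            i=$((i+1))
            curl -fSL -o "downloads/artifact-$i" "$href"
        done < s3-urls.txt
    '''
}
```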

Assignee: Unassigned
Reporter: Stephane Odul (sodul)