JENKINS-68087: Jenkins running in Kubernetes is often OOMKilled


    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Critical
    • Component(s): core, helm-charts
    • Environment: Jenkins 2.249.3 running in Kubernetes 1.16.6

      My Jenkins was deployed with the official Helm chart. After running some jobs, the Jenkins pod was OOMKilled by Kubernetes and restarted automatically.

      Version of Helm and Kubernetes:

      Helm Version: v3.4.2

      Kubernetes Version: v1.16.6

      Which version of the chart: 3.4.0

      Values (relevant parts):

      controller:
        resources:
          requests:
            cpu: 2000m
            memory: 6Gi
          limits:
            cpu: 2000m
            memory: 6Gi
        javaOpts: >-
          -XX:InitialRAMPercentage=25.0 -XX:MaxRAMPercentage=75.0 -XX:MaxMetaspaceSize=256M -XX:MaxDirectMemorySize=256M
          -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+AlwaysPreTouch
          -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC
          -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions

       

      Before Jenkins restarted, there were only a few online nodes (agents provisioned by Kubernetes) running jobs, and the JVM heap was only about 60% used. However, the RES of the Jenkins process was close to the 6Gi limit I had set on the pod, so it was OOMKilled.
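
      For reference, a rough sum of the memory areas allowed by the flags above (the code cache and thread stack figures are assumed JDK defaults, not values from this report) shows why RES can approach the limit even when the heap is only ~60% used, assuming the JVM sizes itself from the 6Gi container limit:

        heap             ~4.5 Gi   (MaxRAMPercentage=75.0 of 6Gi)
        metaspace         256 Mi   (MaxMetaspaceSize)
        direct buffers    256 Mi   (MaxDirectMemorySize)
        code cache       ~240 Mi   (default ReservedCodeCacheSize)
        thread stacks    ~1 Mi per thread, plus GC bookkeeping and other native overhead

      Because -XX:+AlwaysPreTouch touches committed heap pages up front, RES tracks the committed heap rather than live data, so the total can sit near 6Gi well before the heap itself fills up.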

      I went back over the monitoring data and noticed that the pod's memory usage had suddenly doubled at some point, even though the overall job load on Jenkins had not changed significantly.
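
      One way to attribute that kind of jump (not something tried in this report, just a sketch) is JVM Native Memory Tracking, assuming the Jenkins JVM runs as PID 1 in the controller container and the pod name is filled in:

        # add to controller.javaOpts (has a small runtime overhead):
        -XX:NativeMemoryTracking=summary

        # then inspect the running controller; "1" assumes the JVM is PID 1 in the container:
        kubectl exec <jenkins-controller-pod> -- jcmd 1 VM.native_memory summary

      The summary breaks the JVM's reserved and committed memory down by category (heap, metaspace, threads, code cache, GC), which helps separate a heap-sizing problem from other native growth.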

      I tried tuning the configuration parameters, but nothing solved the problem. How can I avoid Jenkins being OOMKilled by Kubernetes?
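
      For context, the mitigation most often suggested for this pattern (a sketch, not a confirmed fix for this issue) is to leave more headroom between the JVM's maximum footprint and the pod limit, for example:

        controller:
          javaOpts: >-
            -XX:MaxRAMPercentage=50.0 -XX:MaxMetaspaceSize=256M -XX:MaxDirectMemorySize=256M

      With a 6Gi limit this caps the heap around 3Gi, leaving roughly 2.5Gi for metaspace, direct buffers, code cache, thread stacks and other native memory, instead of roughly 1Gi with the 75% setting.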

       

            Assignee: Unassigned
            Reporter: jasperyue Tongshu
            Votes: 0
            Watchers: 2