  Jenkins / JENKINS-20046

Massive Jenkins slowdown when jobs are in the queue (due to Queue.maintain())


    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Component: core
    • Environment: Ubuntu 12.04
      Jenkins 1.509.3
      Up-to-date plugins

      As soon as more than a handful of builds get queued, the entire GUI slows to a crawl.

      The reason is that the executor thread running the "Queue.maintain()" method holds the exclusive lock on the queue while it runs a very time-consuming loop that builds the list of hosts matching a given label.

      Because of this, every Jenkins GUI page and every method that needs access to the queue is delayed by ~30 seconds, and the delay grows with the number of queued builds, since Queue.maintain() is called more often.
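
      To make the locking pattern concrete, here is a minimal, self-contained Java sketch of the shape of the problem. The class and member names (QueueContentionSketch, Node, canTake, getItems) are illustrative stand-ins, not the real Jenkins code: maintain() does O(items × nodes) label matching while holding the queue monitor, so even a cheap read like getItems() blocks until it finishes.
      {{
      import java.util.ArrayList;
      import java.util.List;

      // Illustrative stand-in, NOT the real Jenkins classes: shows expensive
      // work done under the queue monitor, and readers blocking on the same lock.
      public class QueueContentionSketch {

          static final class Node {
              final String labels;
              Node(String labels) { this.labels = labels; }

              // Stand-in for Node.canTake()/Label.matches(); in the real code
              // this re-resolves the label's node set on every call.
              boolean canTake(String requiredLabel) {
                  return labels.contains(requiredLabel);
              }
          }

          private final List<String> buildableItems = new ArrayList<>();
          private final List<Node> nodes = new ArrayList<>();

          // Analogue of Queue.maintain(): O(items x nodes) work under the lock.
          public synchronized void maintain() {
              for (String item : buildableItems) {
                  for (Node node : nodes) {      // ~240 executors in our setup
                      if (node.canTake(item)) {
                          break;                 // an assignment would happen here
                      }
                  }
              }
          }

          // Analogue of what every GUI page needs: blocks for the entire
          // duration of maintain(), since both synchronize on the same object.
          public synchronized List<String> getItems() {
              return new ArrayList<>(buildableItems);
          }
      }
      }}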

      The server only becomes responsive again once the entire queue is empty. Setting the server to "shutdown now" does not help.

      A typical stack trace when this occurs looks like this (the first from jstack; the second from the /threadDump page, taken at a different time):
      {{
      "Executor #6 for musxbird038" prio=10 tid=0x00007fe108024800 nid=0x7008 runnable [0x00007fe0f5a99000]
         java.lang.Thread.State: RUNNABLE
          at hudson.model.Slave.getLabelString(Slave.java:245)
          at hudson.model.Node.getAssignedLabels(Node.java:241)
          at hudson.model.Label.matches(Label.java:168)
          at hudson.model.Label.getNodes(Label.java:193)
          at hudson.model.Label.contains(Label.java:405)
          at hudson.model.Node.canTake(Node.java:322)
          at hudson.model.Queue$JobOffer.canTake(Queue.java:250)
          at hudson.model.Queue.maintain(Queue.java:1032)
          - locked <0x00000000e01d3490> (a hudson.model.Queue)
          at hudson.model.Queue.pop(Queue.java:863)
          - locked <0x00000000e01d3490> (a hudson.model.Queue)
          at hudson.model.Executor.grabJob(Executor.java:285)
          at hudson.model.Executor.run(Executor.java:206)
          - locked <0x00000000e01d3490> (a hudson.model.Queue)

      "Executor #0 for musxbird006" Id=591 Group=main RUNNABLE
          at java.util.TreeMap.successor(TreeMap.java:1975)
          at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1101)
          at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154)
          at java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1010)
          at hudson.model.Label$2.resolve(Label.java:159)
          at hudson.model.Label$2.resolve(Label.java:157)
          at hudson.model.labels.LabelAtom.matches(LabelAtom.java:149)
          at hudson.model.labels.LabelExpression$Binary.matches(LabelExpression.java:124)
          at hudson.model.Label.matches(Label.java:157)
          at hudson.model.Label.matches(Label.java:168)
          at hudson.model.Label.getNodes(Label.java:193)
          at hudson.model.Label.contains(Label.java:405)
          at hudson.model.Node.canTake(Node.java:322)
          at hudson.model.Queue$JobOffer.canTake(Queue.java:250)
          at hudson.model.Queue.maintain(Queue.java:1032)
          - locked hudson.model.Queue@2962c1e0
          at hudson.model.Queue.pop(Queue.java:863)
          - locked hudson.model.Queue@2962c1e0
          at hudson.model.Executor.grabJob(Executor.java:285)
          at hudson.model.Executor.run(Executor.java:206)
          - locked hudson.model.Queue@2962c1e0
      }}
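
      As an aside, equivalent dumps (including the "- locked ..." monitor lines) can also be captured programmatically via the standard java.lang.management API, which appears to be what the /threadDump page builds on. A minimal sketch (ThreadDumpSketch is a hypothetical class name):
      {{
      import java.lang.management.ManagementFactory;
      import java.lang.management.ThreadInfo;
      import java.lang.management.ThreadMXBean;

      // Dumps all threads with their locked monitors, similar in spirit to the
      // output above. Note: ThreadInfo.toString() caps the printed stack at a
      // few frames, so jstack is better for deep traces.
      public class ThreadDumpSketch {
          public static void main(String[] args) {
              ThreadMXBean bean = ManagementFactory.getThreadMXBean();
              // (true, true) = include locked monitors and locked synchronizers
              for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
                  System.out.print(info);
              }
          }
      }
      }}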

      As you can see, the Queue.maintain() method does finish successfully, but it takes more than 30 seconds to do so. The server does not stop working and returns to normal once the queue has been fully processed.

      We have ~20 nodes with 12 executor slots each (= 240 executor threads). An equal number of jobs is running, but not all of them consume CPU time on the host (most are idle, waiting for certain events).
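
      At this scale, the expensive part seems to be that the label-to-node resolution is redone for every (item, node) pair while the lock is held. For illustration only (this is a guess at a mitigation, not the actual fix; LabelNodeCache is a hypothetical class), memoizing that resolution would reduce the work under the lock to map lookups:
      {{
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical mitigation sketch, NOT the actual Jenkins fix: memoize
      // the label -> node resolution so maintain() does a cheap lookup per
      // item instead of re-walking every node's label expression.
      public class LabelNodeCache {

          private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

          // Resolve once per label; subsequent calls are O(1) lookups.
          public Set<String> nodesFor(String label, Set<String> allNodes) {
              return cache.computeIfAbsent(label, l -> {
                  Set<String> matches = new HashSet<>();
                  for (String node : allNodes) {
                      if (node.contains(l)) {   // stand-in for real label matching
                          matches.add(node);
                      }
                  }
                  return matches;
              });
          }

          // Must be called whenever nodes or their label assignments change.
          public void invalidate() {
              cache.clear();
          }
      }
      }}
      The obvious cost of this approach is that the cache has to be invalidated on every node or label change, which is why it is only a sketch of the idea.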

      This issue has occurred ever since we upgraded from 1.509.1 to 1.509.3.

      Thanks in advance.

            Assignee: Unassigned
            Reporter: Martin Schröder (mhschroe)
            Votes: 13
            Watchers: 22
