Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Labels: None
- Environment: jenkins 2.111, workflow-durable-task-step 2.19, kubernetes-1.6.0
Description
We recently updated our Jenkins installation from 2.101 to 2.111, including every plugin related to Pipeline.
Since this update, every shell (`sh`) invocation is much slower than before: an `sh` step used to take a few milliseconds and now takes seconds.
As a result, jobs that used to take 1:30 minutes now take up to 25:00 minutes.
We are trying to figure out which plugin is responsible.
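For reference, a minimal scripted-pipeline sketch (not part of the original report; the agent label `some-agent` is a placeholder) that times a loop of trivial `sh` steps makes the regression easy to measure:

```
// Scripted-pipeline sketch to quantify per-step overhead; the label
// "some-agent" is a placeholder for one of our Kubernetes pod labels.
node('some-agent') {
    def start = System.currentTimeMillis()
    for (int i = 0; i < 10; i++) {
        sh 'true'   // trivial command, so elapsed time is almost pure step overhead
    }
    echo "10 sh steps took ${System.currentTimeMillis() - start} ms"
}
```

With the timings reported above, a loop like this would go from tens of milliseconds in total before the upgrade to tens of seconds after it.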
Attachments
Issue Links
- links to PR-427 (Web Link)
Activity
Field | Original Value | New Value |
---|---|---|
Component/s | | pipeline [ 21692 ] |
Component/s | pipeline [ 21692 ] | |
Attachment | | masterThread_Dump.log [ 42310 ] |
Environment | jenkins 2.111, workflow-durable-task-step 2.19 | jenkins 2.111, workflow-durable-task-step 2.19 kubernetes-1.6.0 |
Component/s | | kubernetes-plugin [ 20639 ] |
Component/s | | workflow-durable-task-step-plugin [ 21715 ] |
Attachment | | Screen.png [ 42683 ] |
Attachment | | Screen Shot 2018-05-28 at 17.36.56.png [ 42701 ] |
Attachment | | Screen Shot 2018-05-28 at 17.35.26.png [ 42702 ] |
Attachment | | Screen Shot 2018-07-27 at 16.19.25.png [ 43528 ] |
Attachment | | clabu609-k8s-steps.png [ 45923 ] |
Attachment | | clabu609-docker-steps.png [ 45924 ] |
Comment

We have isolated the cause of the slowdown to the 1-second wait in java.io.PipedOutputStream's write() method. The symptom is that whenever the process tries to write to the buffer (namely all the export-environment-variable statements), a number of the write() calls block for 1+ seconds because the buffer is full. Our solution was to have the main thread delegate the write calls to asynchronous writer threads (each one in charge of writing one export statement to the buffer) instead of writing them itself, and then to ensure all the writer threads have finished at the end. This dramatically reduced our overhead per `sh` call from 3-4 seconds down to less than 1 second. We are currently refining the change and will then submit a formal PR, but if there are any comments or suggestions please let us know.

One additional note: we noticed that the slow `sh` behavior only occurred when `sh` was called within a `container` block, not when the `sh` calls simply used the default container. However, even using the same container as the default container produced slow `sh` calls. Example:

```
pipeline {
  agent {
    kubernetes {
      label "pod-name"
      defaultContainer "jnlp"
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    ...
"""
    }
  }
  stages {
    stage("Loop in Default") {
      steps {
        script {
          for (i = 0; i < 10; i++) {
            sh "which jq"
          }
        }
      }
    }
    stage("Loop in JNLP") {
      steps {
        container("jnlp") {
          script {
            for (i = 0; i < 10; i++) {
              sh "which jq"
            }
          }
        }
      }
    }
  }
}
```
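To make the described delegation concrete, the following is a minimal standalone Groovy sketch of that pattern, not the actual plugin change in PR-427; the variable names and the fake reader are illustrative only. Each potentially blocking PipedOutputStream.write() is handed to a small writer pool, and the main thread only joins the futures at the end.

```
import java.util.concurrent.Executors

def out = new PipedOutputStream()
def input = new PipedInputStream(out, 1024)   // 1 KB pipe buffer (the java.io default size)

// Background reader standing in for whatever consumes the stream.
Thread.startDaemon {
    byte[] buf = new byte[256]
    while (input.read(buf) != -1) {
        // drain
    }
}

// Hypothetical export statements of the kind written before each sh step.
def exports = (0..<200).collect { "export VAR_${it}=value_${it}".toString() }

def writers = Executors.newFixedThreadPool(4)
def pending = exports.collect { statement ->
    // Each task performs one potentially blocking write; with a pool the
    // relative order of the statements is no longer guaranteed.
    writers.submit({ out.write((statement + '\n').getBytes()) } as Runnable)
}
pending.each { it.get() }   // ensure every writer task has finished
writers.shutdown()
out.flush()
out.close()
```

The trade-off of this pooled variant is that statement ordering is not preserved, so an implementation that needs ordered exports would have to account for that.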
Field | Original Value | New Value |
---|---|---|
Assignee | | Carlos Sanchez [ csanchez ] |
Status | Open [ 1 ] | In Progress [ 3 ] |
Status | In Progress [ 3 ] | In Review [ 10005 ] |
Remote Link | | This issue links to "PR-427 (Web Link)" [ 22374 ] |
Resolution | | Fixed [ 1 ] |
Status | In Review [ 10005 ] | Resolved [ 5 ] |