• Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Component: workflow-api-plugin
    • Environment: workflow-api-plugin v2.33
      Jenkins v2.150.3
    • Released As: workflow-api 1239.vd7c497375cb_f

      Some jobs get stuck in a state where they're unable to continue to write data to the console log.  The system log contains multiple entries of the following form until the job ends:

       

      failed to flush /net/users/jenkins/configuration/jobs/XXX/builds/5/log
      java.io.IOException: Stream Closed
      at java.io.FileOutputStream.writeBytes(Native Method)
      at java.io.FileOutputStream.write(FileOutputStream.java:326)
      at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream$FlushControlledOutputStream.write(DelayBufferedOutputStream.java:134)
      at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
      at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
      at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
      at org.jenkinsci.plugins.workflow.log.FileLogStorage.maybeFlush(FileLogStorage.java:190)
      at org.jenkinsci.plugins.workflow.log.FileLogStorage.overallLog(FileLogStorage.java:198)
      at org.jenkinsci.plugins.workflow.job.WorkflowRun.getLogText(WorkflowRun.java:1018)
      at org.jenkinsci.plugins.workflow.job.WorkflowRun.getLogInputStream(WorkflowRun.java:1026)
      at org.jenkinsci.plugins.gwt.GenericWebhookEnvironmentContributor.notLogged(GenericWebhookEnvironmentContributor.java:54)
      at org.jenkinsci.plugins.gwt.GenericWebhookEnvironmentContributor.buildEnvironmentFor(GenericWebhookEnvironmentContributor.java:30)
      at hudson.model.Run.getEnvironment(Run.java:2373)
      at org.jenkinsci.plugins.workflow.job.WorkflowRun.getEnvironment(WorkflowRun.java:468)
      at org.jenkinsci.plugins.workflow.cps.EnvActionImpl.getEnvironment(EnvActionImpl.java:86)
      at org.jenkinsci.plugins.workflow.cps.EnvActionImpl.getEnvironment(EnvActionImpl.java:67)
      at org.jenkinsci.plugins.workflow.support.DefaultStepContext.get(DefaultStepContext.java:72)
      at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305)
      at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268)
      at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176)
      at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
      at sun.reflect.GeneratedMethodAccessor387.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
      at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
      at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
      at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
      at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
      at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
      at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
      at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
      at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
      at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155)
      at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
      at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
      at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
      at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
      at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      at sun.reflect.GeneratedMethodAccessor384.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:103)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      at sun.reflect.GeneratedMethodAccessor384.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:60)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
      at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
      at sun.reflect.GeneratedMethodAccessor384.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
      at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
      at com.cloudbees.groovy.cps.Next.step(Next.java:83)
      at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
      at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
      at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
      at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
      at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
      at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34)
      at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59)
      at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:121)
      at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58)
      at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182)
      at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332)
      at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83)
      at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244)
      at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232)
      at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
      at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
      at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)

       

      When enabling DelayBufferedOutputStream logging, we also see this in the log:

      null
      java.io.IOException: Stream Closed
      at java.io.FileOutputStream.writeBytes(Native Method)
      at java.io.FileOutputStream.write(FileOutputStream.java:326)
      at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream$FlushControlledOutputStream.write(DelayBufferedOutputStream.java:134)
      at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
      at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
      at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream.flushBuffer(DelayBufferedOutputStream.java:82)
      at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream.flushAndReschedule(DelayBufferedOutputStream.java:91)
      at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream$Flush.run(DelayBufferedOutputStream.java:114)
      at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:58)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
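
      The second trace comes from the scheduled background task that flushes the buffered log stream. As a rough, hypothetical sketch of that general mechanism (this is not the plugin's actual DelayBufferedOutputStream, only a simplified illustration): writes go through a buffer, and a scheduled task flushes it periodically, so once the underlying FileOutputStream's descriptor has been closed, every periodic flush fails with the same "Stream Closed" error and the warning repeats until the job ends.

      import java.io.BufferedOutputStream;
      import java.io.FilterOutputStream;
      import java.io.IOException;
      import java.io.OutputStream;
      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.TimeUnit;

      // Hypothetical, simplified stand-in for a delay-buffered log stream:
      // writes are buffered, and a scheduled task flushes the buffer on a
      // fixed delay. Once the wrapped stream's descriptor is gone, every
      // flush attempt throws IOException("Stream Closed").
      class PeriodicallyFlushedStream extends FilterOutputStream {
          private final ScheduledExecutorService scheduler =
                  Executors.newSingleThreadScheduledExecutor();

          PeriodicallyFlushedStream(OutputStream sink, long periodSeconds) {
              super(new BufferedOutputStream(sink));
              scheduler.scheduleWithFixedDelay(() -> {
                  try {
                      out.flush(); // out is the BufferedOutputStream wrapping the sink
                  } catch (IOException e) {
                      e.printStackTrace(); // the real code logs this; here we just print it
                  }
              }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
          }

          @Override
          public void close() throws IOException {
              scheduler.shutdownNow();
              super.close();
          }
      }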

          [JENKINS-56446] IOException in FileLogStorage maybeFlush

          Allan BURDAJEWICZ added a comment -

          I used a FOS that prints the thread dump on close(), see https://github.com/jenkinsci/workflow-api-plugin/compare/master...Dohbedoh:workflow-api-plugin:JENKINS-56446, and there is no call to close() while the issue happens. Eventually, when the build completes, we see the call as expected:

          java.lang.Throwable: JENKINS-56446
          	at org.jenkinsci.plugins.workflow.log.FileLogStorage$Jenkins56446FilterOutputStream.close(FileLogStorage.java:353)
          	at java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:188)
          	at java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:191)
          	at java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:191)
          	at org.jenkinsci.plugins.workflow.log.FileLogStorage$IndexOutputStream.close(FileLogStorage.java:181)
          	at java.base/java.io.PrintStream.close(PrintStream.java:439)
          	at org.jenkinsci.plugins.workflow.log.BufferedBuildListener.close(BufferedBuildListener.java:60)
          	at org.jenkinsci.plugins.workflow.log.TaskListenerDecorator$CloseableTaskListener.close(TaskListenerDecorator.java:300)
          	at org.jenkinsci.plugins.workflow.job.WorkflowRun.finish(WorkflowRun.java:649)
          	at org.jenkinsci.plugins.workflow.job.WorkflowRun$GraphL.onNewHead(WorkflowRun.java:1065)
          	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.notifyListeners(CpsFlowExecution.java:1587)
          	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$3.run(CpsThreadGroup.java:509)
          	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.run(CpsVmExecutorService.java:38)
          	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
          	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
          	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
          	at jenkins.util.ErrorLoggingExecutorService.lambda$wrap$0(ErrorLoggingExecutorService.java:51)
          	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
          	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          	at java.base/java.lang.Thread.run(Thread.java:829)
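
          For reference, a diagnostic wrapper of that kind can be as small as a FilterOutputStream that logs a stack trace whenever close() is called; the sketch below is hypothetical (class and logger names are made up) and only approximates the instrumentation in the branch linked above:

          import java.io.FilterOutputStream;
          import java.io.IOException;
          import java.io.OutputStream;
          import java.util.logging.Level;
          import java.util.logging.Logger;

          // Hypothetical diagnostic wrapper: records a stack trace whenever
          // close() is called, so an unexpected closure of the underlying log
          // stream shows up in the system log with the caller's stack.
          class CloseTracingOutputStream extends FilterOutputStream {
              private static final Logger LOGGER = Logger.getLogger(CloseTracingOutputStream.class.getName());

              CloseTracingOutputStream(OutputStream delegate) {
                  super(delegate);
              }

              @Override
              public void close() throws IOException {
                  LOGGER.log(Level.INFO, "log stream closed", new Throwable("JENKINS-56446"));
                  super.close();
              }
          }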
          


          Jesse Glick added a comment -

          Then I have no hypothesis as to what is wrong.


          Allan BURDAJEWICZ added a comment -

          What I see when the logs are not appended is that os.getChannel().isOpen() is false and os.getFD().valid() is also false. I just can't seem to capture the source of the closure... Will test further.

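          (For anyone wanting to reproduce that check, a minimal hypothetical helper, assuming you can get hold of the FileOutputStream behind the build log:)

          import java.io.FileOutputStream;
          import java.io.IOException;

          // Hypothetical helper mirroring the two checks above: both report
          // false once the descriptor has been closed out from under the
          // stream, after which any write fails with "Stream Closed".
          class LogStreamProbe {
              static void dump(FileOutputStream os) throws IOException {
                  System.out.println("channel open? " + os.getChannel().isOpen());
                  System.out.println("fd valid?     " + os.getFD().valid());
              }
          }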

          Nicolas added a comment - edited

          Hello,

          I'm also affected by this issue.

          To try to troubleshoot it, I created a small reproducer with this pipeline:

           

          pipeline {
              agent any
              stages {
                  stage('Init'){
                      steps{
                          deleteDir()
                      }
                  }
                  stage('Hello') {
                      matrix {
                          axes {
                              axis {
                                  name 'blarg'
                                  values 'foo', 'bar', 'baz', 'qux', 'wizz'
                              }
                          }
                          stages {
                              stage('Main') {
                                  steps {
                                      sh """
                                      for i in \$(seq 1 ${TURNS}); do
                                          echo hello world \${i}!
                                      done
                                      """
                                  }
                              }
                          }
                      }
                  }
              }
          }
          

          It requires one parameter, TURNS, which controls how many times the log message will be displayed.

          To reproduce the issue in a controlled environment, I spawned a Jenkins instance in a Docker container and installed only the Pipeline plugin. The storage is a volume pointing to a local SSD drive formatted as ext4.

          I launched it with TURNS == 10 000 000 and Jenkins started to misbehave around 2 493 402, but the numbers seem to depend on:

          • the number of plugins installed
          • the load (probably related to I/O)

          The symptoms are:

          • logs being truncated
          • job taking ages to finish
          • no way to kill the job properly, be it with cancel or the /kill URL -> Jenkins has to be restarted in the end.

          When I click on the job in Jenkins, a stack trace like this shows up in the console:

          2023-06-26 12:57:50.139+0000 [id=359]   WARNING o.j.p.w.log.FileLogStorage#maybeFlush: failed to flush /var/jenkins_home/jobs/reproducer/builds/3/log
          java.io.IOException: Stream Closed
                  at java.base/java.io.FileOutputStream.writeBytes(Native Method)
                  at java.base/java.io.FileOutputStream.write(FileOutputStream.java:354)
          [...] 

          and frequent (~1 Hz) messages like this:

          o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#71]: checking /var/jenkins_home/workspace/reproducer on  unresponsive for 33 min
          

          Don't hesitate to tell me if I can provide more useful information.

           

          [Edit] The Jenkins version is 2.401.1, the latest Docker image version at the time of writing.


          Nicolas added a comment -

          I just tested with a Matrix job instead of a Pipeline one, and Jenkins is able to cope with 10 000 000 (x5 axes) log lines without an issue.

          So it looks like the issue is really in the pipeline plugin, or one of its dependencies.


          Jesse Glick added a comment -

          https://github.com/openjdk/jdk/blob/63f32fbe9771b8200f707ed5d1d0e6555ad90f8b/src/java.base/share/native/libjava/io_util.c#L104-L106 suggests that there could be various OS-level problems causing this state, rather than an actual premature call to FileOutputStream.close.

          maybeFlush already catches and logs the error, so I suspect this is just a miscellaneous symptom of the real problem.

          Most of the reports come from people doing something that retrieves the full build log text from inside the pipeline itself, though apparently not the most recent report, from ncarrier, whose maybeFlush is, I think, triggered by the normal GUI build log display.
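
          (As a hypothetical illustration of that pattern, not taken from any particular plugin: code like the following, run while the build is still writing its log, goes through the build's log storage and forces a flush on every call, which is what FileLogStorage.maybeFlush in the first trace above corresponds to.)

          import hudson.model.Run;
          import java.io.BufferedReader;
          import java.io.IOException;
          import java.io.InputStreamReader;
          import java.nio.charset.StandardCharsets;

          // Hypothetical example of reading the full build log of a running
          // build. Run.getLogInputStream() goes through the build's log
          // storage, which flushes pending output first.
          class BuildLogScanner {
              static boolean logContains(Run<?, ?> run, String needle) throws IOException {
                  try (BufferedReader reader = new BufferedReader(
                          new InputStreamReader(run.getLogInputStream(), StandardCharsets.UTF_8))) {
                      return reader.lines().anyMatch(line -> line.contains(needle));
                  }
              }
          }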

          The unresponsive warnings are from org.jenkinsci.plugins.workflow.support.concurrent.Timeout as called from DurableTaskStep.Execution.check and generally indicate that there is excessive load on the agent, the controller, or the channel between the two which would prevent the sh step from reliably gathering new output and detecting when it should finish. Setting -Dorg.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep.USE_WATCHING=true (JENKINS-52165) may improve performance. Whether it would also cause this error to disappear, I cannot say.


          Devin Nusbaum added a comment -

          I think I tracked down this issue. It happens if a thread is interrupted while writing to the log when a step transition happens. See https://github.com/jenkinsci/workflow-api-plugin/pull/296.
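
          This matches how interruptible channel I/O behaves in the JDK (a general JDK behavior, not necessarily the exact code path the fix touches): when a thread is interrupted while writing through a FileOutputStream's FileChannel, the channel machinery closes the underlying descriptor, and every later write on that stream fails with "Stream Closed". A small standalone demo of that JDK behavior, unrelated to the plugin code:

          import java.io.File;
          import java.io.FileOutputStream;
          import java.io.IOException;
          import java.nio.ByteBuffer;
          import java.nio.channels.FileChannel;
          import java.nio.charset.StandardCharsets;

          // Standalone demo (not plugin code): an interrupt during a FileChannel
          // write closes the channel and the FileOutputStream's descriptor, so
          // subsequent writes fail with java.io.IOException: Stream Closed.
          public class InterruptClosesStreamDemo {
              public static void main(String[] args) throws Exception {
                  File tmp = File.createTempFile("log", ".txt");
                  tmp.deleteOnExit();
                  try (FileOutputStream fos = new FileOutputStream(tmp, true)) {
                      FileChannel channel = fos.getChannel();
                      Thread.currentThread().interrupt(); // simulate an interrupt arriving mid-write
                      try {
                          channel.write(ByteBuffer.wrap("hello\n".getBytes(StandardCharsets.UTF_8)));
                      } catch (IOException e) {
                          System.out.println("channel write failed: " + e); // ClosedByInterruptException
                      } finally {
                          Thread.interrupted(); // clear the flag so the calls below are unaffected
                      }
                      System.out.println("channel open? " + channel.isOpen());    // false
                      System.out.println("fd valid?     " + fos.getFD().valid()); // false
                      try {
                          fos.write('x');
                      } catch (IOException e) {
                          System.out.println("later write failed: " + e); // Stream Closed
                      }
                  }
              }
          }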


          John Lengeling added a comment -

          Thanks for tracking this down and fixing it. We have been running into this issue several times per month over the past year. Looking forward to testing and deploying the fix to our Jenkins servers.


          Devin Nusbaum added a comment -

          A fix for this issue has been released in Pipeline: API plugin version 1239.vd7c497375cb_f.


          Nicolas added a comment -

          Thank you very much. I'm installing the update now, and I'll verify that things are OK on my side.


            Assignee: Devin Nusbaum (dnusbaum)
            Reporter: Robert Shade (rshade)
            Votes: 11
            Watchers: 24
