JENKINS-55287

Pipeline: Failure to load flow node: FlowNode was not found in storage for head


    Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Component/s: workflow-cps-plugin
    • Labels:
      None
    • Environment:
      Jenkins ver. 2.138.2, Pipeline: Groovy 2.61

      Description

      IMPORTANT: NOTE FROM A MAINTAINER:

      STOP! YOUR STACK TRACE ALONE IS NOT GOING TO HELP SOLVE THIS!

      (Sorry for the all caps, but we're not going to make progress on this issue with commenters adding insufficient information.)

      Note from maintainer: We'd like to be able to fix this, but we really need more information to do so. Whenever you encounter the error shown in the description of this ticket, please zip the build folder ($JENKINS_HOME/jobs/$PATH_TO_JOB/builds/$BUILD_NUMBER/) of the build that failed and upload it here along with the Jenkins system logs, redacting any sensitive content as necessary. Please also include any relevant information on the frequency of the issue, steps to reproduce (did it happen after Jenkins was restarted normally, or did Jenkins crash?), and any messages in the Jenkins system logs that seem relevant. In addition, please check service or other system-level logs for Jenkins to see if there are any issues such as Jenkins taking too long to shut down. Thanks!
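
      For anyone unsure which directory to grab, a Script Console sketch like the following (not part of the original note; the job name and build number are placeholders, the build number mirrors the example in the exception below) prints the on-disk build folder of a given run, which is the directory to zip and attach:

      // Script Console sketch - replace the full job name and build number with your own
      import jenkins.model.Jenkins

      def job = Jenkins.instance.getItemByFullName('YourFolder/YourJobName')
      def run = job.getBuildByNumber(1605)
      // run.rootDir points at $JENKINS_HOME/jobs/.../builds/$BUILD_NUMBER
      println run.rootDir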

      The main thing we are currently looking for is whether these messages are present in the Jenkins logs, right before Jenkins shut down, for the build that has the error:

      • About to try to checkpoint the program for buildCpsFlowExecutionOwner[YourJobName/BuildNumber:YourJobName #BuildNumber]]
      • Trying to save program before shutdown org.jenkinsci.plugins.workflow.cps.CpsFlowExecution$8@RandomHash
      • Finished saving program before shutdown org.jenkinsci.plugins.workflow.cps.CpsFlowExecution$8@RandomHash

      If these messages are not present, it means that Jenkins was unable to save the Pipeline, so the error is expected. If that is the case, fixing the issue probably requires changes to Jenkins packaging to configure longer service timeouts on shutdown, or totally changing how PERFORMANCE_OPTIMIZED works. If the messages are present, then something else is happening.
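
      For context, PERFORMANCE_OPTIMIZED mentioned above is the Pipeline durability setting. As a rough illustration only (the stage contents are placeholders, not from this report), it is configured per job like this; MAX_SURVIVABILITY is the most conservative alternative:

      pipeline {
          agent any
          options {
              // PERFORMANCE_OPTIMIZED defers writing flow nodes to disk;
              // MAX_SURVIVABILITY writes them synchronously at the cost of extra I/O
              durabilityHint('PERFORMANCE_OPTIMIZED')
          }
          stages {
              stage('Build') {
                  steps {
                      echo 'placeholder step'
                  }
              }
          }
      }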

      Exception:

      Creating placeholder flownodes because failed loading originals.
      java.io.IOException: Tried to load head FlowNodes for execution Owner[Platform Service FBI Test/1605:Platform Service FBI Test #1605] but FlowNode was not found in storage for head id:FlowNodeId 1:17
       at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.initializeStorage(CpsFlowExecution.java:678)
       at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.onLoad(CpsFlowExecution.java:715)
       at org.jenkinsci.plugins.workflow.job.WorkflowRun.getExecution(WorkflowRun.java:659)
       at org.jenkinsci.plugins.workflow.job.WorkflowRun.onLoad(WorkflowRun.java:525)
       at hudson.model.RunMap.retrieve(RunMap.java:225)
       at hudson.model.RunMap.retrieve(RunMap.java:57)
       at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:499)
       at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:481)
       at jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:379)
       at hudson.model.RunMap.getById(RunMap.java:205)
       at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.run(WorkflowRun.java:896)
       at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.get(WorkflowRun.java:907)
       at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:65)
       at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:57)
       at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
       at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
       at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$ItemListenerImpl.onLoaded(FlowExecutionList.java:178)
       at jenkins.model.Jenkins.<init>(Jenkins.java:975)
       at hudson.model.Hudson.<init>(Hudson.java:85)
       at hudson.model.Hudson.<init>(Hudson.java:81)
       at hudson.WebAppMain$3.run(WebAppMain.java:233)
      Finished: FAILURE
      

        Attachments

        1. 3219.zip
          398 kB
        2. 48064.tar.gz
          85 kB
        3. flowNodeStore.xml
          22 kB
        4. plugins_versions_2.190.1.txt
          5 kB


            Activity

            mhollingsworthcs Mark Hollingsworth added a comment -


            Devin Nusbaum so we definitely see jenkins.model.Jenkins.<init> as part of the trace, but I'm almost 100% certain that Jenkins did not start up (I would have been paged very early in the morning if it had). What I can tell you is that we have been reproducing this issue by terminating machines.

            [2020-07-24T13:47:01.622Z] Cannot contact someNode: java.lang.InterruptedException
            Creating placeholder flownodes because failed loading originals.
            java.io.IOException: Tried to load head FlowNodes for execution Owner[someFolder/someJob/buildNumber:someFolder/someJob #buildNumber] but FlowNode was not found in storage for head id:FlowNodeId 1:1714
             at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.initializeStorage(CpsFlowExecution.java:679)
             at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.onLoad(CpsFlowExecution.java:716)
             at org.jenkinsci.plugins.workflow.job.WorkflowRun.getExecution(WorkflowRun.java:680)
             at org.jenkinsci.plugins.workflow.job.WorkflowRun.onLoad(WorkflowRun.java:539)
             at hudson.model.RunMap.retrieve(RunMap.java:225)
             at hudson.model.RunMap.retrieve(RunMap.java:57)
             at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:501)
             at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:483)
             at jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:381)
             at hudson.model.RunMap.getById(RunMap.java:205)
             at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.run(WorkflowRun.java:929)
             at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.get(WorkflowRun.java:940)
             at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:65)
             at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:57)
             at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
             at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
             at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$ItemListenerImpl.onLoaded(FlowExecutionList.java:178)
             at jenkins.model.Jenkins.<init>(Jenkins.java:1017)
             at hudson.model.Hudson.<init>(Hudson.java:85)
             at hudson.model.Hudson.<init>(Hudson.java:81)
             at hudson.WebAppMain$3.run(WebAppMain.java:262)

            To restate: we have been able to reproduce this issue by terminating agent machines too early.

            In this case our infrastructure killed the agent in Job 1 before the job finished completely, and we reproduced the issue.

            Job 2 is the following job, which did not get cut off early. (A minimal reproduction sketch follows the logs below.)

             

            Job 1
            16:23:36 [WS-CLEANUP] Deleting project workspace...
            16:23:36 [WS-CLEANUP] Deferred wipeout is used...
            16:23:36 [WS-CLEANUP] done
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] }
            ----------------------------
            Job 2
            16:16:42 [WS-CLEANUP] Deleting project workspace...
            16:16:42 [WS-CLEANUP] Deferred wipeout is used...
            16:16:42 [WS-CLEANUP] done
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] }
            [Pipeline] // parallel
            [Pipeline] }
            [Pipeline] // script
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] stage
            [Pipeline] { (Declarative: Post Actions)
            [Pipeline] node
            16:16:42 Running on a node
            [Pipeline] {
            [Pipeline] echo
            16:16:42 Reporting build status: UNSTABLE
            [Pipeline] notifyBitbucket
             notifying stuff
            [Pipeline] step
            16:16:44 Notifying Bitbucket
            16:16:44 Notified Bitbucket 
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] }
            [Pipeline] // timestamps
            [Pipeline] }
            [Pipeline] // withEnv
            [Pipeline] End of Pipeline
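
            A minimal sketch of the kind of job involved (the agent label 'someNode' is taken from the log above; the long sleep is just a stand-in for our real work). Killing the agent machine, not the controller, while the step is still running matches the scenario described above:

            pipeline {
                agent { label 'someNode' }
                options {
                    timestamps()
                }
                stages {
                    stage('Long-running work') {
                        steps {
                            // Terminate the agent VM while this step is still running
                            sleep time: 30, unit: 'MINUTES'
                        }
                    }
                }
            }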
            

            Hope that helps!


            simon_sudler Simon Sudler added a comment -

            I found the reason for the FlowNode errors on my system:

            Sep  6 00:29:20 ship kernel: [16183195.286559] Out of memory: Kill process 27501 (java) score 122 or sacrifice child
            Sep  6 00:29:20 ship kernel: [16183195.292176] Killed process 27501 (java) total-vm:21995844kB, anon-rss:5537216kB, file-rss:0kB, shmem-rss:0kB
            Sep  6 00:29:20 ship kernel: [16183196.039063] oom_reaper: reaped process 27501 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
            

            And it is very consistent: I checked the last 20 occurrences, and every time the flow node error occurs, the OOM killer has done its work. Why the java process on the build node requires around 20 GB of memory is unclear, because the build itself does not have such huge memory requirements (maybe some other issue).

            Since the OOM killer kills the java process, there is no proper feedback for the FlowNode... maybe a sanity check on whether the build client process is still alive would do the trick and produce a more helpful error message.

            In my case, more memory helped...

            nishant_dani nishant dani added a comment - edited

            I am able to consistently get this error. I have attached the jobs/yeticore/branches/master/builds/2/workflow-fallback/flowNodeStore.xml. My interpretation of this file could be wrong, but it appears it is searching for flow node 49, while all the file contains is flow nodes 51 and 52. This is the file in workflow-fallback; the flow in workflow seems to have a single node with id 2.

            Totally blocked at this time, so willing to investigate further 

            Update - I am running Jenkins inside a Docker container. In Docker - Preferences - Resources - Memory, I increased memory from 2 GB to 16 GB (and restarted Docker), and I was able to get a successful run. Will keep monitoring.
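
            (Not part of the original comment: one quick way to confirm how much heap the Jenkins JVM actually sees after raising the container limit is the Script Console sketch below; note the JVM heap is governed by -Xmx or container-aware ergonomics, not by the Docker limit alone.)

            // Jenkins Script Console sketch - prints the heap the controller JVM is working with
            def rt = Runtime.getRuntime()
            def mb = { long b -> (long) (b / (1024 * 1024)) }
            println "Max heap:   ${mb(rt.maxMemory())} MB"
            println "Total heap: ${mb(rt.totalMemory())} MB"
            println "Free heap:  ${mb(rt.freeMemory())} MB"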

             flowNodeStore.xml

            drulli Ulli Hafner added a comment - edited

            On my side, adding more memory to my Docker container runtime worked as well.

            It would be helpful if this exception were caught and a meaningful message (low memory) shown to the users.

            mramonleon Ramon Leon added a comment -

            Lowering the priority as the behavior is expected and the fix is to improve the log message.


              People

              Assignee:
              Unassigned
              Reporter:
              haorui658 Rui Hao
              Votes:
              36
              Watchers:
              50

                Dates

                Created:
                Updated: