We use a pipeline of 40+ VMs to run our tests. Our goal is to find the total time it takes for all the VMs to run all of our tests; this information is all available through the flowGraphTable. But the flowGraphTable is not easy to obtain through the API, so we use api/json?depth=2 to find all of the nodes that have run in each build. We take each node's link and append the endpoint "wfapi/describe" (code below) to retrieve the time it took for that node to run. However, two major issues arise with this workaround:
- It takes a few minutes to parse through all the nodes and make each request
- **Many nodes return a JSON response that is null - no data is part of the response**
The second point is our major roadblock: the very nodes that should be providing us the most useful information are responding with null.
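A minimal sketch of the workaround described above, with canned responses standing in for real HTTP calls (the base URL and node link are hypothetical; real code would GET each wfapi/describe URL, e.g. with the requests library):

```python
# Sketch of the workaround: build each node's wfapi/describe URL and sum the
# durations, skipping the nodes whose response is null. The base URL, node
# link, and canned responses below are invented for illustration.

def describe_url(base_url: str, node_link: str) -> str:
    """Append wfapi/describe to a node link taken from api/json?depth=2."""
    return base_url.rstrip("/") + "/" + node_link.strip("/") + "/wfapi/describe"

def total_duration_millis(responses) -> int:
    """Sum durationMillis across node responses, ignoring null payloads."""
    return sum(r["durationMillis"] for r in responses if r is not None)

# Canned data in place of real HTTP results; note the null (None) node.
url = describe_url("https://jenkins.example.com", "job/tests/42/execution/node/7")
canned = [{"durationMillis": 1500}, None, {"durationMillis": 2500}]
print(url)
print(total_duration_millis(canned))  # null responses are skipped
```

Even in this simplified form, the two issues are visible: one request per node (slow at 40+ VMs worth of nodes), and null responses that must be filtered out.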
We have also tried appending the endpoint "wfapi/describe" after the build number and iterating through all of the stages and their individual responses. The issue here, however, is that only the first 100 tests, including their times, are shown - everything else is missing. For us, this is the most valuable information, as it lets us see how long each VM took to run its tests.
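To illustrate that approach, here is a sketch of iterating a build-level wfapi/describe response, with a canned response in place of a real call (stage names and values are invented):

```python
# Iterate the stages of a build-level wfapi/describe response and collect
# per-stage durations. The response below is canned sample data; a real call
# would GET <build-url>/wfapi/describe.

def stage_durations(describe_response: dict) -> dict:
    """Map each stage name to its durationMillis."""
    return {s["name"]: s["durationMillis"]
            for s in describe_response.get("stages", [])}

sample = {
    "stages": [
        {"name": "VM-01 tests", "durationMillis": 90000},
        {"name": "VM-02 tests", "durationMillis": 120000},
    ]
}
print(stage_durations(sample))
# With 40+ VMs and many tests per VM, a real response stops after the first
# 100 entries - the truncation described above.
```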
I am proposing four new keys and values for the endpoint "api/json?depth=2", as shown below:
- "durationMillis" is the time it took for the test to run - in milliseconds
- "startTimeMillis" is the time that process started - in UNIX milliseconds
- "queueDurationMillis" is the time the process waited in the queue before being run - in milliseconds
- "pauseDurationMillis" is the time the process was paused before running again - in milliseconds
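With those keys present, per-node timing could be read straight out of a single api/json?depth=2 response. A hypothetical sketch of what that might look like (the node payload and all values below are invented to match the proposed keys):

```python
# Hypothetical node entry as it might appear under api/json?depth=2 if the
# four proposed keys were added. All values are invented for illustration.
node = {
    "id": "7",
    "durationMillis": 180000,          # run time, in ms
    "startTimeMillis": 1700000000000,  # start time, in UNIX ms
    "queueDurationMillis": 5000,       # time waiting in queue, in ms
    "pauseDurationMillis": 2000,       # time paused, in ms
}

# Total wall time the node occupied, from entering the queue to finishing:
total = (node["queueDurationMillis"]
         + node["pauseDurationMillis"]
         + node["durationMillis"])
end_time = node["startTimeMillis"] + node["durationMillis"]
print(total)     # 187000
print(end_time)  # 1700000180000
```

This would replace the per-node wfapi/describe round trips with one request per build, and the null-response problem would disappear along with them.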