Type: Bug
Resolution: Unresolved
Priority: Minor
Labels: None
Hi there,
We're using the Pipeline system extensively to build Python projects in our firm.
Whilst it is mostly amazing, one of the most screen-chuckingly frustrating things about it is when parallel jobs fail with no output other than 'error', e.g.:
[Pipeline] [Unit-ts2-el7-0] echo
10:38:15 [Unit-ts2-el7-0] xx2
[Pipeline] [Unit-ts2-el7-0] fileExists
[Pipeline] [Unit-ts2-el7-0] readFile
[Pipeline] }
[Pipeline] // dir
[Pipeline] fileExists
[Pipeline] fileExists
[Pipeline] error
No traceback, nothing in the server logs. There is no way to debug this other than commenting out bits of code and putting print statements all over the place - not ideal!
It seems to happen most often when code in the shared workflowLibs has problems. For example, this code:
def work() {
    node {
        sh "echo foo > foo.txt; echo bar >> foo.txt; echo baz >> foo.txt"
        return readFile('foo.txt').trim().split('\n').join(',')
    }
}

def jobs = [
    1: { echo work() },
    2: { echo work() }
]

parallel jobs
... will cause a script approval failure on the signature staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods join java.lang.Object[] java.lang.String.
When the code is executed in a normal workflowScript, you get a nice traceback showing you the problem. If, however, the code is in the call() block of a global variable file (sketched below), it just silently fails with 'error'.
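For reference, here is roughly what that layout looks like when the helper lives in workflowLibs as a global variable. The file name vars/work.groovy and the surrounding workflowScript are illustrative assumptions, not our exact setup:

// workflowLibs: vars/work.groovy (illustrative name)
def call() {
    node {
        sh "echo foo > foo.txt; echo bar >> foo.txt; echo baz >> foo.txt"
        // Same blocked signature as above:
        // DefaultGroovyMethods.join(Object[], String)
        return readFile('foo.txt').trim().split('\n').join(',')
    }
}

// workflowScript (illustrative) - invoking the global variable's call()
def jobs = [1: { echo work() }, 2: { echo work() }]
parallel jobs

Run this way, the script approval rejection never reaches the build log; each parallel branch just ends with the bare 'error' line shown above.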