Type: Bug
Priority: Critical
Resolution: Not A Defect
Environment: Running Jenkins in a Kubernetes cluster on GCP
My devs are complaining of builds failing randomly when a stage starts. The builds fail when the pipeline attempts to run the "sh" step inside a container of the pod running the job. Here is the error message I see:
```
[Pipeline] sh
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:87: adding pid 3786794 to cgroups caused \"failed to write 3786794 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod70971cd7-153a-11e9-9fe5-42010a567404/6b66fd31d9718f168c34810477e328045af5caead06e9e7f48ed3b9431eb3d37/cgroup.procs: invalid argument\""
[Pipeline] echo
Error: java.io.IOException: Pipe closed
...
...
...
ERROR: script returned exit code 1
Finished: FAILURE
```
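For context, the failure happens when the "sh" step starts inside a container of an agent pod, presumably provisioned by the Kubernetes plugin. The reporter's actual Jenkinsfile is not attached; the following is only a minimal sketch of the kind of pipeline that exhibits this pattern, and the pod label, container image, and build command are illustrative placeholders.

```groovy
// Hypothetical sketch, not the reporter's Jenkinsfile: an agent pod is
// provisioned by the Kubernetes plugin and the "sh" step runs inside one
// of its containers. Label, image, and the build command are placeholders.
podTemplate(label: 'k8s-agent', containers: [
    containerTemplate(name: 'build', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('k8s-agent') {
        stage('Build') {
            container('build') {
                // The rpc / cgroups error in the log above surfaces as soon
                // as this step tries to start a process in the container.
                sh 'mvn -B clean verify'
            }
        }
    }
}
```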
[JENKINS-55527] Builds fail randomly when running sh in container
Priority | Original: Major [ 3 ] | New: Critical [ 2 ]
Resolution | New: Not A Defect [ 7 ]
Status | Original: Open [ 1 ] | New: Closed [ 6 ]