Type: Bug
Resolution: Cannot Reproduce
Priority: Major
Environment: Dockerized Jenkins 2.67 on Azure K8s Cluster
InstallPlugins:
- kubernetes:0.12
- workflow-aggregator:2.5
- credentials-binding:1.12
- git:3.5.0
- pipeline-github-lib:1.0
- ghprb:1.39.0
- blueocean:1.1.5
- pipeline-utility-steps:1.30
- github-oauth:0.27
- github-pullrequest:0.1.0-rc25
- workflow-remote-loader:1.4
3 K8s Master and 3 Workers
Running a pipeline job with 2 stages
- Helm dry run install of an image
- Helm install of an image
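The `pipeline.helmLint` / `pipeline.helmDeploy` helpers below come from a shared library and presumably shell out to the Helm CLI. A rough sketch of the equivalent commands inside the `helm` container (chart path, release name, and namespace are placeholders, not taken from this report; Helm 2.5.1 syntax, matching the container image):

```groovy
container('helm') {
    // Stage 1: lint the chart, then a dry-run install
    sh 'helm lint my-chart'
    sh 'helm install --dry-run --debug my-chart'
    // Stage 2: the real install (Helm 2 uses --name for the release)
    sh 'helm install --name my-release --namespace default my-chart'
}
```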
I can see in the Jenkins master log that even when Helm and Tiller managed to connect, I got the same error: the latch was already released.
SEVERE: onClose called but latch already finished. This indicates a bug in the kubernetes-plugin
Jul 31, 2017 1:45:28 PM org.jenkinsci.plugins.durabletask.ProcessLiveness isAlive
WARNING: org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@303b5d9c; decorates hudson.Launcher$RemoteLauncher@561d1a44 on hudson.remoting.Channel@428191f0:Channel to /10.2.13.41 does not seem able to determine whether processes are alive or not
EXITCODE   0
/home/jenkins # "ps" "-o" "pid=" "9999"
1
44
48
49
50
83
88
/home/jenkins # printf "EXITCODE %3d" $?; exit
Jul 31, 2017 1:45:28 PM org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1$1 onClose
SEVERE: onClose called but latch already finished. This indicates a bug in the kubernetes-plugin
Jul 31, 2017 1:45:28 PM org.jenkinsci.plugins.durabletask.ProcessLiveness isAlive
WARNING: org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@63f09283; decorates hudson.Launcher$RemoteLauncher@58a8b185 on hudson.remoting.Channel@428191f0:Channel to /10.2.13.41 does not seem able to determine whether processes are alive or not
EXITCODE   0
/home/jenkins/workspace/marketing-site_chen2-master-JZLCNMYKV67ZVLSGU6NZ6TNEGT7XMATLTBJO5DINEQDOFPPSCFCA # printf "EXITCODE %3d" $?; exit
Jul 31, 2017 1:45:29 PM org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1$1 onClose
SEVERE: onClose called but latch already finished. This indicates a bug in the kubernetes-plugin
Jul 31, 2017 1:45:29 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
INFO: Terminating Kubernetes instance for slave jenkins-slave-tgrfl-9qbfr
Jul 31, 2017 1:45:29 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
INFO: Terminated Kubernetes instance for slave jenkins-slave-tgrfl-9qbfr
EXITCODE   0
Jul 31, 2017 1:45:29 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
INFO: Disconnected computer jenkins-slave-tgrfl-9qbfr
Jul 31, 2017 1:45:29 PM org.jenkinsci.plugins.workflow.job.WorkflowRun finish
INFO: marketing-site/chen2-master #6 completed: FAILURE
podTemplate(label: 'jenkins-pipeline', containers: [
    containerTemplate(name: 'jnlp', image: 'jenkinsci/jnlp-slave:2.62', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins', resourceRequestCpu: '200m', resourceLimitCpu: '800m', resourceRequestMemory: '256Mi', resourceLimitMemory: '2048Mi'),
    containerTemplate(name: 'helm', image: 'coalmineadmin/k8s-helm:2.5.1', command: 'cat', ttyEnabled: true)
]) {
    stage ('Test Cluster deployment') {
        container('helm') {
            // run helm chart linter
            pipeline.helmLint(chart_dir)
            // run dry-run helm chart installation
            pipeline.helmDeploy(
                dry_run     : true,
                name        : config.app.name,
                version_tag : 'chen2-master-6bdd0e3',
                chart_dir   : chart_dir,
                replicas    : config.app.replicas,
                cpu         : config.app.cpu,
                memory      : config.app.memory
            )
        }
    }
    stage ('Deploy to Kubernetes') {
        container('helm') {
            // Deploy using Helm chart
            pipeline.helmDeploy(
                dry_run     : false,
                name        : config.app.name,
                namespace   : config.app.namespace,
                version_tag : 'chen2-master-6bdd0e3',
                chart_dir   : chart_dir,
                replicas    : config.app.replicas,
                cpu         : config.app.cpu,
                memory      : config.app.memory
            )
            // Run helm tests
            if (config.app.test) {
                pipeline.helmTest(
                    name : config.app.name
                )
            }
        }
    }
}
I seem to have the same issue.
Helm install/package runs smoothly on a Jenkins slave containing a build image, but after running some functional tests the final delete of the Helm deployment fails about 50% of the time.
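If the intermittent failure is a transient exec race between the container step and the pod (as the latch errors above suggest), one workaround worth trying (a sketch, not a confirmed fix) is to wrap the flaky delete in Jenkins' built-in `retry` step; the release name below is a placeholder:

```groovy
container('helm') {
    // Retry the delete up to 3 times if the exec channel drops mid-command.
    retry(3) {
        sh 'helm delete --purge my-release'   // Helm 2 syntax
    }
}
```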
Jenkins Log:
Pipeline error: