Bug
Resolution: Unresolved
Minor
None
The container consistently disappears in the Jenkins Pipeline, with both declarative (DSL) and scripted job types.
TL;DR: when the job is delayed for any reason, in our case while waiting on an input step, for more than roughly 5 minutes, the containers are simply gone.
This is a major problem for us because we need to be able to control deployment into various environments for change control.
Thanks in advance for any input.
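To make the failure mode concrete, here is a minimal illustrative sketch (not our real Jenkinsfile; stage names, echo steps and the trimmed pod spec are placeholders, and the actual agent definition we use is pasted below). The build pauses on an input step while the pod agent is held, and once the wait exceeds roughly five minutes the containers are gone:

// Illustrative reproduction sketch only. Stage names, echo steps and this trimmed
// pod spec are placeholders; the real agent definition is the one pasted below.
pipeline {
    agent {
        kubernetes {
            label 'repro-' + env.BUILD_NUMBER
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: notary
    image: some.private.dtr/admin/jenkinsnotary:v3
    command:
    - cat
    tty: true
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                container('notary') {
                    sh 'echo building'
                }
            }
        }
        stage('Approve') {
            steps {
                // The build waits here; if the approval takes longer than ~5 minutes
                // the pod and its containers disappear and the next stage fails.
                input message: 'Deploy to the next environment?'
            }
        }
        stage('Deploy') {
            steps {
                container('notary') {
                    sh 'echo deploying'
                }
            }
        }
    }
}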
kubernetes {
    label 'multi-image-' + env.BUILD_NUMBER
    defaultContainer 'jnlp'
    yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: notary
    slaveConnectTimeout: 500
    image: some.private.dtr/admin/jenkinsnotary:v3
    command:
    - cat
    tty: true
    livenessProbe:
      exec:
        command:
        - cat
      initialDelaySeconds: 5
      periodSeconds: 5
    volumeMounts:
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
    }
}
environment {
    JENKINS_JAVA_OPTIONS = "-Djava.awt.headless=true -Dhudson.remoting.Launcher.pingIntervalSec=172800"
}
podTemplate(label: 'multi-image-' + env.BUILD_NUMBER,
    containers: [
        containerTemplate(image: 'some.private.dtr/admin/jenkinsnotary:v3',
            alwaysPullImage: true,
            name: 'notary',
            command: 'cat',
            ttyEnabled: true,
            slaveConnectTimeout: 300,
            activeDeadlineSeconds: 172800)
    ],
    volumes: [hostPathVolume(name: 'dockersock', hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
}
hudson.remoting.ProxyException: java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
    at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
    at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
Caused: hudson.remoting.ProxyException: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 10.255.1.161/10.255.1.161:1896 failed. The channel is closing down or has closed down
    at hudson.remoting.Channel.call(Channel.java:948)
    at hudson.FilePath.act(FilePath.java:1070)
    at hudson.FilePath.act(FilePath.java:1059)
    at hudson.FilePath.mkdirs(FilePath.java:1244)
    at org.jenkinsci.lib.configprovider.model.ConfigFileManager.provisionConfigFile(ConfigFileManager.java:86)
    at org.jenkinsci.plugins.configfiles.buildwrapper.ManagedFileUtil.provisionConfigFiles(ManagedFileUtil.java:57)
    at org.jenkinsci.plugins.configfiles.buildwrapper.ConfigFileBuildWrapper.setUp(ConfigFileBuildWrapper.java:66)
    at org.jenkinsci.plugins.workflow.steps.CoreWrapperStep$Execution.start(CoreWrapperStep.java:80)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270)
Caused: hudson.remoting.ProxyException: org.codehaus.groovy.runtime.InvokerInvocationException: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 10.255.1.161/10.255.1.161:1896 failed. The channel is closing down or has closed down
Are the pods deleted in the middle of the pipeline, or are they still running?
Can you paste the debug logs from the master?
You could check the system properties mentioned in JENKINS-55392 and increase some of the timeouts there.
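For illustration only (the property name below is a placeholder; use the actual properties listed in JENKINS-55392), such timeouts can be raised by adding the corresponding -D flags to the master's JVM options, or tried out temporarily from the Jenkins Script Console:

// Runtime-only sketch for the Jenkins Script Console.
// 'some.kubernetes.plugin.timeout' is a placeholder property name; substitute the
// properties referenced in JENKINS-55392. Values set here are lost on restart, so the
// matching -D<property>=<value> flags should also go into the master's JVM options.
System.setProperty('some.kubernetes.plugin.timeout', '300')
println System.getProperty('some.kubernetes.plugin.timeout')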