JENKINS-60936: Lack of log information in job when pod creation fails

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: kubernetes-plugin
    • Labels: None
    • Environment: CloudBees Jenkins Master v2.176.4.3, kubernetes-plugin 1.18.3

      When pod creation fails, there is no output in the job log to show that anything is wrong. The job also never seems to complete or fail; I left it overnight and it was still running. The build console only shows:

      Still waiting to schedule task
      All nodes of label ‘customer_smbrob_EbankUI_Multibranch_Builder_beta_55-rfrx9’ are offline
      
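      For context, the job is a multibranch Pipeline that presumably requests its agent through the kubernetes plugin with an inline pod yaml (see the PodTemplate dump below). A minimal sketch of that kind of pipeline, not the exact job definition (the container name and image here are only illustrative), looks roughly like this:

      // Illustrative sketch only; the real job uses its own pod spec (see the
      // PodTemplate dump below). Any image the admission webhook rejects will do.
      podTemplate(yaml: '''
      apiVersion: v1
      kind: Pod
      spec:
        containers:
        - name: docker
          image: docker:dind
          command: [cat]
          tty: true
      ''') {
          node(POD_LABEL) {
              // Never reached when the pod is rejected: the build just sits at
              // "Still waiting to schedule task".
              stage('Build') {
                  container('docker') {
                      sh 'docker version'
                  }
              }
          }
      }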

      An admin found relevant errors in other logs.

      From the Manage Jenkins UI:

      Error in provisioning; agent=KubernetesSlave name: customer-smbrob-ebankui-multibranch-builder-beta-55-rfrx9-hhh3g, template=PodTemplate{inheritFrom='', name='customer_smbrob_EbankUI_Multibranch_Builder_beta_55-rfrx9-lcf98', namespace='jenkins', label='customer_smbrob_EbankUI_Multibranch_Builder_beta_55-rfrx9', serviceAccount='jenkins', nodeSelector='fs.evry.com/finods-group=dts', nodeUsageMode=EXCLUSIVE, workspaceVolume=EmptyDirWorkspaceVolume [memory=false], volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume@39c58bb9, HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock]], containers=[ContainerTemplate{name='jnlp', image='fsnexus.evry.com:8085/jenkins/jnlp-slave:3.40-1-jdk11', workingDir='/home/jenkins/agent', command='', args='${computer.jnlpmac} ${computer.name}', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@793f694d}], annotations=[org.csanchez.jenkins.plugins.kubernetes.PodAnnotation@aab9c821], imagePullSecrets=[org.csanchez.jenkins.plugins.kubernetes.PodImagePullSecret@c109299b], yamls=[apiVersion: v1
      kind: Pod
      spec:
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
          fsGroup: 1000
        containers:
        - name: node
          image: fsnexus.evry.com:8085/evryfs/node-dev-docker:node12
          imagePullPolicy: Always
          command:
          - cat
          tty: true
          env:
          - name: PUPPETEER_SKIP_CHROMIUM_DOWNLOAD
            value: "true"
        - name: docker
          image: docker:dind
          imagePullPolicy: Always
          env:
          - name: HOME
            value: /tmp
          command:
            - cat
          tty: true]}
      io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://icp-global.finods.com:8001/api/v1/namespaces/jenkins/pods. Message: Internal error occurred: admission webhook "trust.hooks.securityenforcement.admission.cloud.ibm.com" denied the request: 
      Deny "docker.io/docker:dind", no matching repositories in ClusterImagePolicy and no ImagePolicies in the "jenkins" namespace. Received status: Status(apiVersion=v1, code=500, details=StatusDetails(causes=[StatusCause(field=null, message=admission webhook "trust.hooks.securityenforcement.admission.cloud.ibm.com" denied the request: 
      Deny "docker.io/docker:dind", no matching repositories in ClusterImagePolicy and no ImagePolicies in the "jenkins" namespace, reason=null, additionalProperties={})], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Internal error occurred: admission webhook "trust.hooks.securityenforcement.admission.cloud.ibm.com" denied the request: 
      Deny "docker.io/docker:dind", no matching repositories in ClusterImagePolicy and no ImagePolicies in the "jenkins" namespace, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=InternalError, status=Failure, additionalProperties={}).
      	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:503)
      	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:442)
      	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:406)
      	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:365)
      	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:234)
      	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:796)
      	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:326)
      	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:322)
      	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:124)
      	at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)
      	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
      	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Jan 31, 2020 10:50:03 AM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSla
      

       

      The job should have failed, and the job log should have included the underlying error, so that debugging the issue doesn't require admin assistance.
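      A possible workaround for the hang (not for the missing log output) is to wrap the agent allocation in an explicit timeout so the build eventually aborts instead of waiting forever. A sketch, with an arbitrary 15-minute limit:

      // Workaround sketch only: bound how long the build may wait for the pod.
      // The 15-minute limit is arbitrary; pick something above normal pod
      // startup time. This aborts the build but still does not surface the
      // admission webhook error in the job log.
      def podYaml = '''
      apiVersion: v1
      kind: Pod
      spec:
        containers:
        - name: docker
          image: docker:dind
          command: [cat]
          tty: true
      '''

      timeout(time: 15, unit: 'MINUTES') {
          podTemplate(yaml: podYaml) {
              node(POD_LABEL) {
                  stage('Build') {
                      container('docker') {
                          sh 'docker version'
                      }
                  }
              }
          }
      }

      With this in place the build at least fails after the timeout instead of running overnight, although the root cause still only shows up in the admin-side logs.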

       


          There are no comments yet on this issue.

            Assignee: Unassigned
            Reporter: Ken Børge Viktil (kenborge)
            Votes: 2
            Watchers: 3
