JENKINS-48571

checkout scm fails silently after "Could not determine exact tip revision of <branch>" in logs

      This Pipeline:

      pipeline {
          agent none
          options {
              buildDiscarder(logRotator(numToKeepStr: '10'))
              timeout(time: 1, unit: 'HOURS')
          }
          stages {
              stage('Test') {
                  failFast true
                  parallel {
                      stage('Debian Linux') {
                          agent { docker 'maven:slim' }
                          steps {
                              checkout scm
                              sh 'mvn test -B'
                          }
                          post {
                              always {
                                  junit testResults: '**/surefire-reports/**/*.xml', allowEmptyResults: true
                                  archiveArtifacts artifacts: '**/*.jar', fingerprint: true
                              }
                          }
                      }
                      stage('Alpine Linux') {
                          agent { docker 'maven:3-alpine' }
                          steps {
                              checkout scm
                              sh 'mvn test -B'
                          }
                          post {
                              always {
                                  junit testResults: '**/surefire-reports/**/*.xml', allowEmptyResults: true
                                  archiveArtifacts artifacts: '**/*.jar', fingerprint: true
                              }
                          }
                      }
                      stage('FreeBSD 11') {
                          agent { label 'freebsd' }
                          steps {
                              echo 'Code Valet does not currently support Maven on FreeBSD'
                          }
                      }
                  }
              }
          }
      }
      

      It doesn't seem to actually execute anything; here's the raw output of the run:

      Started by user R. Tyler Croy
      ERROR: Could not determine exact tip revision of defaults; falling back to nondeterministic checkout
      Checking out git https://github.com/rtyler/joni.git into /var/jenkins_home/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@script to read Jenkinsfile
      Fetching changes from the remote Git repository
      Fetching without tags
      Checking out Revision 63d07e3559aece171ec6b7e5f4a595155ab99ac2 (origin/defaults)
      Commit message: "@abayer says to try this"
      Loading library pipeline-library@master
      Attempting to resolve master from remote references...
      Found match: refs/heads/master revision fecdabab952e9647b513c91367202e22e5161981
      Fetching changes from the remote Git repository
      Fetching without tags
      Checking out Revision fecdabab952e9647b513c91367202e22e5161981 (master)
      Commit message: "Make sure we check out scm, duh"
      Loading library inline-pipeline-secrets@master
      Attempting to resolve master from remote references...
      Found match: refs/heads/master revision 0b1840825b47d0a207151c22d49259b42f208728
      Fetching changes from the remote Git repository
      Fetching without tags
      Checking out Revision 0b1840825b47d0a207151c22d49259b42f208728 (master)
      Commit message: "Here be dragons"
      [Pipeline] timeout
      Timeout set to expire in 1 hr 0 min
      [Pipeline] {
      [Pipeline] stage
      [Pipeline] { (Test)
      [Pipeline] parallel
      [Pipeline] [Debian Linux] { (Branch: Debian Linux)
      [Pipeline] [Alpine Linux] { (Branch: Alpine Linux)
      [Pipeline] [FreeBSD 11] { (Branch: FreeBSD 11)
      [Pipeline] [Debian Linux] stage
      [Pipeline] [Debian Linux] { (Debian Linux)
      [Pipeline] [Alpine Linux] stage
      [Pipeline] [Alpine Linux] { (Alpine Linux)
      [Pipeline] [FreeBSD 11] stage
      [Pipeline] [FreeBSD 11] { (FreeBSD 11)
      [Pipeline] [FreeBSD 11] node
      [Pipeline] [Debian Linux] node
      [Debian Linux] Running on docker-ubuntuab9e60 in /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ
      [Pipeline] [Alpine Linux] node
      [Pipeline] [Debian Linux] {
      [Pipeline] [Debian Linux] sh
      [Debian Linux] [joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ] Running shell script
      [Debian Linux] + docker inspect -f . maven:slim
      [Debian Linux] 
      [Debian Linux] Error: No such object: maven:slim
      [Pipeline] [Debian Linux] sh
      [Debian Linux] [joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ] Running shell script
      [Debian Linux] + docker pull maven:slim
      [Debian Linux] slim: Pulling from library/maven
      [Debian Linux] e7bb522d92ff: Pulling fs layer
      [Debian Linux] acf3a7df1b51: Pulling fs layer
      [Debian Linux] c1c98005fcff: Pulling fs layer
      [Debian Linux] 39dcc90226db: Pulling fs layer
      [Debian Linux] 23649b2102b0: Pulling fs layer
      [Debian Linux] dc6f0e3cd819: Pulling fs layer
      [Debian Linux] a1b832f08af6: Pulling fs layer
      [Debian Linux] 85571d835004: Pulling fs layer
      [Debian Linux] 26c8abdc6384: Pulling fs layer
      [Debian Linux] d11ca4afc9f8: Pulling fs layer
      [Debian Linux] 39dcc90226db: Waiting
      [Debian Linux] 23649b2102b0: Waiting
      [Debian Linux] dc6f0e3cd819: Waiting
      [Debian Linux] a1b832f08af6: Waiting
      [Debian Linux] 85571d835004: Waiting
      [Debian Linux] 26c8abdc6384: Waiting
      [Debian Linux] d11ca4afc9f8: Waiting
      [Debian Linux] c1c98005fcff: Verifying Checksum
      [Debian Linux] c1c98005fcff: Download complete
      [Debian Linux] acf3a7df1b51: Verifying Checksum
      [Debian Linux] acf3a7df1b51: Download complete
      [Debian Linux] e7bb522d92ff: Verifying Checksum
      [Debian Linux] e7bb522d92ff: Download complete
      [Debian Linux] 39dcc90226db: Verifying Checksum
      [Debian Linux] 39dcc90226db: Download complete
      [Debian Linux] dc6f0e3cd819: Download complete
      [Debian Linux] a1b832f08af6: Verifying Checksum
      [Debian Linux] a1b832f08af6: Download complete
      [Debian Linux] 85571d835004: Verifying Checksum
      [Debian Linux] 85571d835004: Download complete
      [Debian Linux] 26c8abdc6384: Verifying Checksum
      [Debian Linux] 26c8abdc6384: Download complete
      [Debian Linux] d11ca4afc9f8: Download complete
      [Debian Linux] 23649b2102b0: Verifying Checksum
      [Debian Linux] 23649b2102b0: Download complete
      [Debian Linux] e7bb522d92ff: Pull complete
      [Debian Linux] acf3a7df1b51: Pull complete
      [Debian Linux] c1c98005fcff: Pull complete
      [Debian Linux] 39dcc90226db: Pull complete
      [Debian Linux] 23649b2102b0: Pull complete
      [Debian Linux] dc6f0e3cd819: Pull complete
      [Debian Linux] a1b832f08af6: Pull complete
      [Debian Linux] 85571d835004: Pull complete
      [Debian Linux] 26c8abdc6384: Pull complete
      [Debian Linux] d11ca4afc9f8: Pull complete
      [Debian Linux] Digest: sha256:029478613539ddc6ed15bb267e60285500fdd2a505aa08479583d481a6ba5a20
      [Debian Linux] Status: Downloaded newer image for maven:slim
      [FreeBSD 11] Still waiting to schedule task
      [FreeBSD 11] freebsd-11-3a49c0 is offline
      [Alpine Linux] Still waiting to schedule task
      [Alpine Linux] Waiting for next available executor on docker
      [Pipeline] [Debian Linux] withDockerContainer
      [Debian Linux] docker-ubuntuab9e60 does not seem to be running inside a container
      [Debian Linux] $ docker run -t -d -u 1000:1000 -w /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:rw,z -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat maven:slim
      [Pipeline] [Debian Linux] {
      Post stage
      [Pipeline] [Debian Linux] junit
      [Debian Linux] Recording test results
      [Pipeline] [Debian Linux] archiveArtifacts
      [Debian Linux] Archiving artifacts
      [Pipeline] [Debian Linux] }
      [Debian Linux] $ docker stop --time=1 0e293d2e7e2c7658f2453c28be97784ebc685ba457b8041c72456a4d3d4f4f71
      [Debian Linux] $ docker rm -f 0e293d2e7e2c7658f2453c28be97784ebc685ba457b8041c72456a4d3d4f4f71
      [Pipeline] [Debian Linux] // withDockerContainer
      [Alpine Linux] Running on docker-ubuntuab9e60 in /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ
      [Pipeline] [Debian Linux] }
      [Pipeline] [Debian Linux] // node
      [Pipeline] [Alpine Linux] {
      [Pipeline] [Debian Linux] }
      [Pipeline] [Debian Linux] // stage
      [Pipeline] [Debian Linux] }
      [Debian Linux] Failed in branch Debian Linux
      [Pipeline] [Alpine Linux] }
      [Pipeline] [FreeBSD 11] // node
      [Pipeline] [Alpine Linux] // node
      [Pipeline] [Alpine Linux] }
      [Pipeline] [FreeBSD 11] }
      [Pipeline] [Alpine Linux] // stage
      [Pipeline] [FreeBSD 11] // stage
      [Pipeline] [Alpine Linux] }
      [Alpine Linux] Failed in branch Alpine Linux
      [Pipeline] [FreeBSD 11] }
      [FreeBSD 11] Failed in branch FreeBSD 11
      [Pipeline] // parallel
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] }
      [Pipeline] // timeout
      [Pipeline] End of Pipeline
      ERROR: Could not determine exact tip revision of defaults
      Finished: ABORTED
      

      Plugins built into Code Valet:

      build/repos/apache-httpcomponents-client-4-api-plugin,b632272042139465076fa378e81f3fee729f885f
      build/repos/authentication-tokens-plugin,2a38368d4d9f61a6db0fde5b2f551293fc659cfe
      build/repos/azure-commons-plugin,b585a8f787f57e5552063db224e04a54454c1c87
      build/repos/azure-credentials-plugin,6ea5592773ad84f2f48854d1c611e1a529a22a9d
      build/repos/azure-vm-agents-plugin,e92e1955107b74be9536be2f57d1025966f9dde8
      build/repos/blueocean-autofavorite-plugin,0ddd2f7ea21c83d11a994ae569c56f970e7ba926
      build/repos/blueocean-display-url-plugin,d6515117712c9b30a32f32bb6e1a977711b9b037
      build/repos/blueocean-plugin,bb34e738f8430643086cdc8ac686d681ec90f9e1
      build/repos/branch-api-plugin,661e921eef2977970585829a691cbc0595dffea1
      build/repos/cloudbees-bitbucket-branch-source-plugin,b34bd66c7db1ce4f8b2abeb719530d7f7468cbc4
      build/repos/cloudbees-folder-plugin,afddb64d3704a3a7d2a87f8280f8748416b77b2d
      build/repos/cloud-stats-plugin,701ba1e7bf000e05fa5b436961f6125fbafa2e7c
      build/repos/credentials-binding-plugin,4d6f4c44baaa5aa1dd82a7b64ce4d8990a43d2fe
      build/repos/credentials-plugin,412ee9702029dc48bb7406fa9318ef8f2fc49b76
      build/repos/datadog-plugin,accf6ab44532c79613347416d5397c4156572cd4
      build/repos/display-url-api-plugin,e00542dea8bd0ee339863a11f5b61b872c3eacbd
      build/repos/docker-commons-plugin,b34c6f3ca4216eb66b09af6eef73db5f3c33adc2
      build/repos/docker-workflow-plugin,0e95d9409eb7d219dd9f4f37383c5321c630b889
      build/repos/durable-task-plugin,275ad653f71d89008ff061f6d8133d6684dcb1ad
      build/repos/embeddable-build-status-plugin,df555760b1a669a9acd558c5a498970bcf7febbe
      build/repos/favorite-plugin,08dcfd6b4fcb2b7c25efdf9ae995d0fedb8acdc2
      build/repos/git-client-plugin,5526a61878c1c789b545e0a81eecac859aca87b6
      build/repos/github-api-plugin,182e264acf362cf973362988cdfb24cb74738fab
      build/repos/github-branch-source-plugin,08b3d320281c74ef41c4d8ee064623fa75179c1d
      build/repos/github-oauth-plugin,5ad606efeedfd9ede0cbde31ec608ea1ba90ced1
      build/repos/github-plugin,68ceb5960549c6a5ce55c5288c7eaabbbb3719a2
      build/repos/git-plugin,828ca74769783d2ccd2c16ce038a87cacc66e140
      build/repos/git-server-plugin,1762ba8ccf3c7a46a607b912736912b784a63524
      build/repos/htmlpublisher-plugin,f187d56a4cd3fdcba69d0abc9deaac5be072d2d5
      build/repos/jackson2-api-plugin,74355ea21dacf9e948724d10c9abca0e2c3296bf
      build/repos/jira-plugin,4c7dcde762b55beca74176ec5e837d1784a81ae7
      build/repos/jsch-plugin,dfa3a710c4827896269302503ae86ae026d202f0
      build/repos/js-libs-plugin,79ca191724036878a88f13325851af14b0c70452
      build/repos/junit-plugin,0061cf267c63f6bb64bc165f395e01c0c8c38fd5
      build/repos/kubernetes-plugin,ec7d74f8782941d2b146193ebb14ae6ef5da9a46
      build/repos/lockable-resources-plugin,14b1abaddf441da9a20e8b1e0d94844550529a1e
      build/repos/mailer-plugin,18b8274e1a31b60d7e20492f1ecfa39483b90b37
      build/repos/matrix-auth-plugin,9c859ed3ea932024e73f665400457cbf106b8dcf
      build/repos/matrix-project-plugin,9e4d3bf904094986ea111a90a0c2c14019e2dd7f
      build/repos/mercurial-plugin,f6e5f0bff2d8678c0e7cb13d0db031b45c7a0437
      build/repos/pipeline-build-step-plugin,2e4012ecac352d248ca42feaa18c59aabfe2fa2c
      build/repos/pipeline-graph-analysis-plugin,7ea371dc90fef8e4f0627a365320fceec67f089f
      build/repos/pipeline-input-step-plugin,7aea2abf486438200cb1ae3fb553311d7ced11e0
      build/repos/pipeline-milestone-step-plugin,f7ef68f74aa1e923bb6ca3cdd10541fa6040f123
      build/repos/pipeline-model-definition-plugin,32c5e4178c89faca3ad184b4030d8f403425bdac
      build/repos/pipeline-stage-step-plugin,addb287b9d5b81f3d4ab3b15fb6dd33e7370062a
      build/repos/pipeline-stage-view-plugin,f5ce1f768457c7ff18408e6a54cb84f47a6a4ed8
      build/repos/plain-credentials-plugin,da51ba8703eefb201f3f6c4f4da3714fb83a37d6
      build/repos/pubsub-light-plugin,e374bc0248f37f6572e87913bf0c27cbf5b75d53
      build/repos/scm-api-plugin,45818a22f9d9846cfeb13d9c3f37d0b0dba15e04
      build/repos/script-security-plugin,38e6f6f7850b539a9a430d6ce5fd8c2e146a3181
      build/repos/sentry-plugin,a29347c83966fa0068a3c00af3fefcd9f19ed329
      build/repos/sse-gateway-plugin,685c6c709c96aa63f7d781c0df9060b9928e8b41
      build/repos/ssh-credentials-plugin,55e3d318eeddd52575b2ed632aa3841ba3b4834e
      build/repos/structs-plugin,eb9c1d5d5b1a9794925b62e17b9b4b1ee2113b13
      build/repos/token-macro-plugin,4d24aa5716f8d84e236824ab36d87c90a9897e8d
      build/repos/variant-plugin,3688261cf3c030b3eed603cf96e4758f79b569ac
      build/repos/workflow-aggregator-plugin,d67c39534f908f8432b44e588b0350b064e86bb5
      build/repos/workflow-api-plugin,bbab9280f0ce01988173ade26f8f349b14494499
      build/repos/workflow-basic-steps-plugin,3a464997109f0814f2399d15a2730e49ad74651c
      build/repos/workflow-cps-global-lib-plugin,aaa7ed1e04ce2ef751b2a770e71f0286c509ddc6
      build/repos/workflow-cps-plugin,861996956e7f931d6190af28fa0c0083d09b1d4d
      build/repos/workflow-durable-task-step-plugin,603b62f65ac5796a80b5598685b34ac30a644885
      build/repos/workflow-job-plugin,f3f45712196c9bea60101dbc8b804f6309f69cf2
      build/repos/workflow-multibranch-plugin,c49261f827d032a637475071ba6742f0c40a8653
      build/repos/workflow-scm-step-plugin,b9e8530ca4173b499a17af0468deace17139d458
      build/repos/workflow-step-api-plugin,0b984e5df55b88c39efb9a649e226fba48f5cb8f
      build/repos/workflow-support-plugin,5146dbf08bf4cfe84de9c6744ddfd18e5827b243
      

          [JENKINS-48571] checkout scm fails silently after "Could not determine exact tip revision of <branch>" in logs

          Michael Neale added a comment -

          oh this is a good one


          R. Tyler Croy added a comment -

          I don't think parallel has anything to do with this; I think something in either the Docker Pipeline master branch or the Declarative master branch is broken.

          pipeline {
              agent none
              options {
                  buildDiscarder(logRotator(numToKeepStr: '10'))
                  timeout(time: 1, unit: 'HOURS')
              }
              stages {
                  stage('Debian Linux') {
                      agent { docker 'maven:slim' }
                      steps {
                          checkout scm
                          sh 'mvn test -B'
                      }
                      post {
                          always {
                              junit testResults: '**/surefire-reports/**/*.xml', allowEmptyResults: true
                              archiveArtifacts artifacts: '**/*.jar', fingerprint: true
                          }
                      }
                  }
                  stage('Alpine Linux') {
                      agent { docker 'maven:3-alpine' }
                      steps {
                          checkout scm
                          sh 'mvn test -B'
                      }
                      post {
                          always {
                              junit testResults: '**/surefire-reports/**/*.xml', allowEmptyResults: true
                              archiveArtifacts artifacts: '**/*.jar', fingerprint: true
                          }
                      }
                  }
                  stage('FreeBSD 11') {
                      agent { label 'freebsd' }
                      steps {
                          echo 'Code Valet does not currently support Maven on FreeBSD'
                      }
                  }
              }
          }
          
          Started by user R. Tyler Croy
          ERROR: Could not determine exact tip revision of defaults; falling back to nondeterministic checkout
          Checking out git https://github.com/rtyler/joni.git into /var/jenkins_home/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@script to read Jenkinsfile
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision b7c73a1b840e43bbc23e520112f98d6805cbeecf (origin/defaults)
          Commit message: "Linear Pipeline for testing"
          Loading library pipeline-library@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision cc8272a3a18c24736625675fb0edd64622fff689
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision cc8272a3a18c24736625675fb0edd64622fff689 (master)
          Commit message: "Don't do parallel by default until JENKINS-48571 is fixed"
          Loading library inline-pipeline-secrets@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision 0b1840825b47d0a207151c22d49259b42f208728
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 0b1840825b47d0a207151c22d49259b42f208728 (master)
          Commit message: "Here be dragons"
          [Pipeline] timeout
          Timeout set to expire in 1 hr 0 min
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Debian Linux)
          [Pipeline] node
          Running on docker-ubuntuab9e60 in /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ
          [Pipeline] {
          [Pipeline] sh
          [joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ] Running shell script
          + docker inspect -f . maven:slim
          .
          [Pipeline] withDockerContainer
          docker-ubuntuab9e60 does not seem to be running inside a container
          $ docker run -t -d -u 1000:1000 -w /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:rw,z -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat maven:slim
          [Pipeline] {
          Post stage
          [Pipeline] junit
          Recording test results
          [Pipeline] archiveArtifacts
          Archiving artifacts
          [Pipeline] }
          $ docker stop --time=1 f72b31232b49ea5635e77a3adf5d0ab0b8cc84b3ec08d6893cd8832169cea0ae
          $ docker rm -f f72b31232b49ea5635e77a3adf5d0ab0b8cc84b3ec08d6893cd8832169cea0ae
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Alpine Linux)
          Stage 'Alpine Linux' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (FreeBSD 11)
          Stage 'FreeBSD 11' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // timeout
          [Pipeline] End of Pipeline
          ERROR: Could not determine exact tip revision of defaults
          Finished: FAILURE
          
          


          R. Tyler Croy added a comment -

          An even simpler example:

          pipeline {
              agent { docker 'maven:slim' }
              options {
                  buildDiscarder(logRotator(numToKeepStr: '10'))
                  timeout(time: 1, unit: 'HOURS')
              }
              stages {
                  stage('Debian Linux') {
                      steps {
                          checkout scm
                          sh 'mvn test -B'
                      }
                      post {
                          always {
                              junit testResults: '**/surefire-reports/**/*.xml', allowEmptyResults: true
                              archiveArtifacts artifacts: '**/*.jar', fingerprint: true
                          }
                      }
                  }
              }
          }
          
          
          Started by user R. Tyler Croy
          ERROR: Could not determine exact tip revision of defaults; falling back to nondeterministic checkout
          Checking out git https://github.com/rtyler/joni.git into /var/jenkins_home/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@script to read Jenkinsfile
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 683bca6508b6ea3ede88d5eb6f459a0757f6a63f (origin/defaults)
          Commit message: "What if we just used a top-level docker agent"
          Loading library pipeline-library@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision cc8272a3a18c24736625675fb0edd64622fff689
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision cc8272a3a18c24736625675fb0edd64622fff689 (master)
          Commit message: "Don't do parallel by default until JENKINS-48571 is fixed"
          Loading library inline-pipeline-secrets@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision 0b1840825b47d0a207151c22d49259b42f208728
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 0b1840825b47d0a207151c22d49259b42f208728 (master)
          Commit message: "Here be dragons"
          [Pipeline] node
          Running on docker-ubuntuab9e60 in /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Declarative: Agent Setup)
          [Pipeline] sh
          [joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ] Running shell script
          + docker pull maven:slim
          slim: Pulling from library/maven
          Digest: sha256:029478613539ddc6ed15bb267e60285500fdd2a505aa08479583d481a6ba5a20
          Status: Image is up to date for maven:slim
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] sh
          [joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ] Running shell script
          + docker inspect -f . maven:slim
          .
          [Pipeline] withDockerContainer
          docker-ubuntuab9e60 does not seem to be running inside a container
          $ docker run -t -d -u 1000:1000 -w /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ:rw,z -v /home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:/home/azureuser/workspace/workspace/joni_defaults-MBT2YSTY7MUAHVKKYHYN6TISLWL6VSKQMMQ2D2GOPWZJSC72ENZQ@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat maven:slim
          [Pipeline] {
          [Pipeline] timeout
          Timeout set to expire in 1 hr 0 min
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Debian Linux)
          Post stage
          [Pipeline] junit
          Recording test results
          [Pipeline] archiveArtifacts
          Archiving artifacts
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // timeout
          [Pipeline] }
          $ docker stop --time=1 d88820f294211bd6a43180e13bb3dcb703960294e1d228520a68a1cd77590ffc
          $ docker rm -f d88820f294211bd6a43180e13bb3dcb703960294e1d228520a68a1cd77590ffc
          [Pipeline] // withDockerContainer
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] End of Pipeline
          ERROR: Could not determine exact tip revision of defaults
          Finished: FAILURE
          
          


          Andrew Bayer added a comment -

          So far, can't reproduce this.


          Andrew Bayer added a comment -

          "ERROR: Could not determine exact tip revision of defaults" seems to be the problem, I'm assuming. I don't see that in any of my testing.


          Andrew Bayer added a comment -

          Here is where that message is coming from. Something's wrong with fetching from the SCM source...


          Andrew Bayer added a comment -

          Digging further, if it got so far as Connector.checkConnectionValidity in GitHubSCMSource#retrieve, we'd see something in the logs like Connecting to https://api.github.com using ..., which isn't present. So whatever's wrong is happening between scmSource.fetch(head,listener) in SCMBinder (I think, given the error message) getting called and that call making it to one of the first few lines of GitHubSCMSource#retrieve. I dunno. Punting to stephenconnolly =)

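          To make the call path described above concrete, here is a rough Groovy sketch of the behaviour; it is an illustration only, not the actual SCMBinder or GitHubSCMSource code. SCMSource.fetch(head, listener) is the scm-api call named in the comment; the helper name resolveScmForBuild, the build(head, revision) call, and the variable names are assumptions made for the sketch.

          // Illustrative sketch only, not plugin source: when fetch() cannot resolve the
          // branch tip (for example if the call never reaches GitHubSCMSource#retrieve),
          // it returns null and the build falls back to an unpinned, nondeterministic checkout.
          def resolveScmForBuild(scmSource, head, listener) {
              def tip = scmSource.fetch(head, listener)      // SCMSource.fetch(SCMHead, TaskListener)
              if (tip == null) {
                  // The path that produces the message seen in the logs above.
                  listener.error("Could not determine exact tip revision of ${head.name}; falling back to nondeterministic checkout")
                  return scmSource.build(head, null)         // SCM with no pinned revision
              }
              return scmSource.build(head, tip)              // SCM pinned to the exact tip revision
          }

          If this is roughly what happens, it would also explain why the message shows up before the Jenkinsfile checkout in every failing log above.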

          Andrew Bayer added a comment -

          And note that this is with bleeding edge builds of master for every plugin and for core, not releases.


          R. Tyler Croy added a comment - edited

          Seeing this with another instance that I built this morning, using this Jenkinsfile:

          #!/usr/bin/env groovy
          
          pipeline {
              agent { label 'linux && docker' }
              options {
                  buildDiscarder(logRotator(numToKeepStr: '10'))
                  timeout(time: 3, unit: 'HOURS')
              }
          
              stages {
                  stage('Validate Terraform') {
                      steps {
                          echo 'Validating Terraform'
                          sh 'make validate'
                          echo 'Making sure we can generate our Kubernetes configurations'
                          sh 'make generate-k8s'
                      }
                  }
                  stage('Create builder') {
                      steps {
                          sh 'make builder'
                      }
                  }
                  stage('Build necessary plugins') {
                      when { branch 'master' }
                      steps {
                          sh 'make plugins'
                      }
                  }
                  stage('Create master container') {
                      when { branch 'master' }
                      steps {
                          sh 'make master'
                      }
                      post {
                          always {
                              archiveArtifacts artifacts: 'build/git-refs.txt', fingerprint: true
                          }
                      }
                  }
                  stage('Test') {
                      steps {
                          sh 'make check'
                      }
                  }
              }
              post {
                  always {
                      sh 'make clean'
                  }
              }
          }
          

          With these SHA1s:

          build/repos/apache-httpcomponents-client-4-api-plugin,b632272042139465076fa378e81f3fee729f885f
          build/repos/authentication-tokens-plugin,2a38368d4d9f61a6db0fde5b2f551293fc659cfe
          build/repos/azure-commons-plugin,b585a8f787f57e5552063db224e04a54454c1c87
          build/repos/azure-credentials-plugin,6ea5592773ad84f2f48854d1c611e1a529a22a9d
          build/repos/azure-vm-agents-plugin,e92e1955107b74be9536be2f57d1025966f9dde8
          build/repos/blueocean-autofavorite-plugin,0ddd2f7ea21c83d11a994ae569c56f970e7ba926
          build/repos/blueocean-display-url-plugin,d6515117712c9b30a32f32bb6e1a977711b9b037
          build/repos/blueocean-plugin,ca6f95350915a220492bdbf9af2c9e15b3c18f68
          build/repos/branch-api-plugin,661e921eef2977970585829a691cbc0595dffea1
          build/repos/cloudbees-bitbucket-branch-source-plugin,9d374b95493a05a3c878d8a9a13c901698ab33db
          build/repos/cloudbees-folder-plugin,afddb64d3704a3a7d2a87f8280f8748416b77b2d
          build/repos/cloud-stats-plugin,701ba1e7bf000e05fa5b436961f6125fbafa2e7c
          build/repos/credentials-binding-plugin,4d6f4c44baaa5aa1dd82a7b64ce4d8990a43d2fe
          build/repos/credentials-plugin,412ee9702029dc48bb7406fa9318ef8f2fc49b76
          build/repos/datadog-plugin,accf6ab44532c79613347416d5397c4156572cd4
          build/repos/display-url-api-plugin,e00542dea8bd0ee339863a11f5b61b872c3eacbd
          build/repos/docker-commons-plugin,b34c6f3ca4216eb66b09af6eef73db5f3c33adc2
          build/repos/docker-workflow-plugin,0e95d9409eb7d219dd9f4f37383c5321c630b889
          build/repos/durable-task-plugin,275ad653f71d89008ff061f6d8133d6684dcb1ad
          build/repos/embeddable-build-status-plugin,df555760b1a669a9acd558c5a498970bcf7febbe
          build/repos/favorite-plugin,08dcfd6b4fcb2b7c25efdf9ae995d0fedb8acdc2
          build/repos/git-client-plugin,a917e85812f18326fdaeb8af9ac06fa426ece5f0
          build/repos/github-api-plugin,182e264acf362cf973362988cdfb24cb74738fab
          build/repos/github-branch-source-plugin,74fd93e8f9dfaa99a8cdc9f348dbf3c4fa741563
          build/repos/github-oauth-plugin,5ad606efeedfd9ede0cbde31ec608ea1ba90ced1
          build/repos/github-plugin,68ceb5960549c6a5ce55c5288c7eaabbbb3719a2
          build/repos/git-plugin,3e210861c2642523d1339dbddfcd2f9ca94cc369
          build/repos/git-server-plugin,1762ba8ccf3c7a46a607b912736912b784a63524
          build/repos/htmlpublisher-plugin,f187d56a4cd3fdcba69d0abc9deaac5be072d2d5
          build/repos/jackson2-api-plugin,74355ea21dacf9e948724d10c9abca0e2c3296bf
          build/repos/jira-plugin,4c7dcde762b55beca74176ec5e837d1784a81ae7
          build/repos/jsch-plugin,dfa3a710c4827896269302503ae86ae026d202f0
          build/repos/js-libs-plugin,79ca191724036878a88f13325851af14b0c70452
          build/repos/junit-plugin,0061cf267c63f6bb64bc165f395e01c0c8c38fd5
          build/repos/kubernetes-plugin,ec7d74f8782941d2b146193ebb14ae6ef5da9a46
          build/repos/lockable-resources-plugin,14b1abaddf441da9a20e8b1e0d94844550529a1e
          build/repos/mailer-plugin,18b8274e1a31b60d7e20492f1ecfa39483b90b37
          build/repos/matrix-auth-plugin,9c859ed3ea932024e73f665400457cbf106b8dcf
          build/repos/matrix-project-plugin,8dc7208c1b0fcbdcb2de3fea24b4e77dbbeb4004
          build/repos/mercurial-plugin,f6e5f0bff2d8678c0e7cb13d0db031b45c7a0437
          build/repos/pipeline-build-step-plugin,4e115c5a18d0696e8d56b538f650e5e35757f755
          build/repos/pipeline-graph-analysis-plugin,7ea371dc90fef8e4f0627a365320fceec67f089f
          build/repos/pipeline-input-step-plugin,7aea2abf486438200cb1ae3fb553311d7ced11e0
          build/repos/pipeline-milestone-step-plugin,f7ef68f74aa1e923bb6ca3cdd10541fa6040f123
          build/repos/pipeline-model-definition-plugin,32c5e4178c89faca3ad184b4030d8f403425bdac
          build/repos/pipeline-stage-step-plugin,addb287b9d5b81f3d4ab3b15fb6dd33e7370062a
          build/repos/pipeline-stage-view-plugin,f5ce1f768457c7ff18408e6a54cb84f47a6a4ed8
          build/repos/plain-credentials-plugin,da51ba8703eefb201f3f6c4f4da3714fb83a37d6
          build/repos/pubsub-light-plugin,e374bc0248f37f6572e87913bf0c27cbf5b75d53
          build/repos/scm-api-plugin,45818a22f9d9846cfeb13d9c3f37d0b0dba15e04
          build/repos/script-security-plugin,38e6f6f7850b539a9a430d6ce5fd8c2e146a3181
          build/repos/sentry-plugin,a29347c83966fa0068a3c00af3fefcd9f19ed329
          build/repos/sse-gateway-plugin,685c6c709c96aa63f7d781c0df9060b9928e8b41
          build/repos/ssh-credentials-plugin,55e3d318eeddd52575b2ed632aa3841ba3b4834e
          build/repos/structs-plugin,eb9c1d5d5b1a9794925b62e17b9b4b1ee2113b13
          build/repos/token-macro-plugin,4d24aa5716f8d84e236824ab36d87c90a9897e8d
          build/repos/variant-plugin,3688261cf3c030b3eed603cf96e4758f79b569ac
          build/repos/workflow-aggregator-plugin,d67c39534f908f8432b44e588b0350b064e86bb5
          build/repos/workflow-api-plugin,bbab9280f0ce01988173ade26f8f349b14494499
          build/repos/workflow-basic-steps-plugin,3a464997109f0814f2399d15a2730e49ad74651c
          build/repos/workflow-cps-global-lib-plugin,aaa7ed1e04ce2ef751b2a770e71f0286c509ddc6
          build/repos/workflow-cps-plugin,31fec1fc61dca6565d7c55d1af4567cc3f3103c6
          build/repos/workflow-durable-task-step-plugin,603b62f65ac5796a80b5598685b34ac30a644885
          build/repos/workflow-job-plugin,f3f45712196c9bea60101dbc8b804f6309f69cf2
          build/repos/workflow-multibranch-plugin,c49261f827d032a637475071ba6742f0c40a8653
          build/repos/workflow-scm-step-plugin,b9e8530ca4173b499a17af0468deace17139d458
          build/repos/workflow-step-api-plugin,0b984e5df55b88c39efb9a649e226fba48f5cb8f
          build/repos/workflow-support-plugin,5ea4e1370ecbbfc83cfbf4a11da374e9fe5f7480
          

          Pipeline Run:

          Started by user R. Tyler Croy
          ERROR: Could not determine exact tip revision of master; falling back to nondeterministic checkout
          Checking out git https://github.com/CodeValet/codevalet.git into /var/jenkins_home/workspace/codevalet_master-XYZXXQZ6PUPRKWFNBQ2IUSQTCHKBRU7WI4WVS7PBL25JTIT4E6MQ@script to read Jenkinsfile
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 21f17763fa83ef76d76670159dff7d2cfac23dca (origin/master)
          Commit message: "Canary lives on its own host silly goose"
          Loading library pipeline-library@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision cc8272a3a18c24736625675fb0edd64622fff689
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision cc8272a3a18c24736625675fb0edd64622fff689 (master)
          Commit message: "Don't do parallel by default until JENKINS-48571 is fixed"
          Loading library inline-pipeline-secrets@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision 0b1840825b47d0a207151c22d49259b42f208728
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 0b1840825b47d0a207151c22d49259b42f208728 (master)
          Commit message: "Here be dragons"
          [Pipeline] node
          Still waiting to schedule task
          Waiting for next available executor
          Running on docker-ubuntu4b4810 in /home/azureuser/workspace/workspace/codevalet_master-XYZXXQZ6PUPRKWFNBQ2IUSQTCHKBRU7WI4WVS7PBL25JTIT4E6MQ
          [Pipeline] {
          [Pipeline] timeout
          Timeout set to expire in 3 hr 0 min
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Validate Terraform)
          [Pipeline] echo
          Validating Terraform
          [Pipeline] sh
          [codevalet_master-XYZXXQZ6PUPRKWFNBQ2IUSQTCHKBRU7WI4WVS7PBL25JTIT4E6MQ] Running shell script
          + make validate
          make: *** No rule to make target 'validate'.  Stop.
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Create builder)
          Stage 'Create builder' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Build necessary plugins)
          Stage 'Build necessary plugins' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Create master container)
          Stage 'Create master container' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Test)
          Stage 'Test' skipped due to earlier failure(s)
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] stage
          [Pipeline] { (Declarative: Post Actions)
          [Pipeline] sh
          [codevalet_master-XYZXXQZ6PUPRKWFNBQ2IUSQTCHKBRU7WI4WVS7PBL25JTIT4E6MQ] Running shell script
          + make clean
          make: *** No rule to make target 'clean'.  Stop.
          [Pipeline] }
          [Pipeline] // stage
          [Pipeline] }
          [Pipeline] // timeout
          [Pipeline] }
          [Pipeline] // node
          [Pipeline] End of Pipeline
          ERROR: script returned exit code 2
          Finished: FAILURE
          


          R. Tyler Croy added a comment - - edited

          To add some more data to this: I found that when I clicked "Scan Repository Now", the "master" branch (which I had triggered directly in the above comment) was triggered and started executing correctly:

          Branch indexing
          Connecting to https://api.github.com using rtyler/****** (GitHub Access Token)
          Obtained Jenkinsfile from 21f17763fa83ef76d76670159dff7d2cfac23dca
          Loading library pipeline-library@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision cc8272a3a18c24736625675fb0edd64622fff689
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision cc8272a3a18c24736625675fb0edd64622fff689 (master)
          Commit message: "Don't do parallel by default until JENKINS-48571 is fixed"
          Loading library inline-pipeline-secrets@master
          Attempting to resolve master from remote references...
          Found match: refs/heads/master revision 0b1840825b47d0a207151c22d49259b42f208728
          Fetching changes from the remote Git repository
          Fetching without tags
          Checking out Revision 0b1840825b47d0a207151c22d49259b42f208728 (master)
          Commit message: "Here be dragons"
          [Pipeline] node
          Still waiting to schedule task
          Waiting for next available executor
          Running on docker-ubuntued1f80 in /home/azureuser/workspace/workspace/codevalet_master-XYZXXQZ6PUPRKWFNBQ2IUSQTCHKBRU7WI4WVS7PBL25JTIT4E6MQ
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Declarative: Checkout SCM)
          [Pipeline] checkout
          Cloning the remote Git repository
          Cloning with configured refspecs honoured and without tags
          remote: Counting objects
          remote: Compressing objects
          Receiving objects
          Resolving deltas
          Updating references
          Fetching without tags
          Checking out Revision 21f17763fa83ef76d76670159dff7d2cfac23dca (master)
          Commit message: "Canary lives on its own host silly goose"
          


          R. Tyler Croy added a comment -

          Another data point which may or may not be useful: since I added the "Discover pull requests from forks" trait to the Multibranch Pipeline referenced in my previous comment, it has not exhibited this error.

          WHAT DOES IT ALL MEAN?!?


          R. Tyler Croy added a comment -

          Alright, I'm not sure the traits mean anything, but they do cause a re-scan of the organization.

          When I rescanned the organization with the joni project (from the original report), the branches all built correctly. After rescanning I can manually fire a Pipeline run and it will work properly.

          I'm confused AF.


          Michael Neale added a comment -

          So it was the rescan that fixed it, not the trait - just a coincidence perhaps? 


          R. Tyler Croy added a comment -

          Correct, a rescan has fixed a number of previously "Broken" Pipelines with this behavior.

          Since adding a new trait triggers a rescan, it's not surprising that it appeared to help; it wasn't the trait itself that unbroke things.


          Michael Neale added a comment -

          abayer is there any more useful information in the above that lets you reproduce this? (Yes, this is all on master of everything.)


          Michael Neale added a comment -

          Any hot tips appreciated stephenconnolly


          R. Tyler Croy added a comment -

          I continue to see this. One potentially correlated symptom: I think I am only seeing it when I manually run a branch from within Blue Ocean.

          Since I don't have solid reproduction steps, it's hard to tell.


          R. Tyler Croy added a comment -

          I was incorrect; Blue Ocean doesn't have anything to do with it. I clicked 'Build Now' in the classic web UI and that caused the error again.

          It seems to consistently happen after restarting the Jenkins master, i.e. between the master starting up and the organization being scanned.


          Andrew Bayer added a comment -

          So it looks like the SCMSource#id field is getting reset on restarts, resulting in a "Build Now" trying to look up a SCMSource by an ID that is no longer relevant. And then it gets a NullSCMSource as a result. So...wut?


          Andrew Bayer added a comment -

          fwiw, this happens with git plugin 3.4.0 and later, but not 3.3.2 and earlier. Nothing seems to be setting the id field using the old value on restart any more. stephenconnolly - thoughts?


          Andrew Bayer added a comment -

          A little further detail - it passes if you explicitly set an id, but if you're falling back on the auto-generated UUID for the id, it fails.


          Andrew Bayer added a comment -

          Ok, I think this is because the owner isn't saved after getId() is called, setting the id field to the UUID. Not sure what the right way to fix that is, so... stephenconnolly, all you.
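
          To illustrate the failure mode being described here, a toy Groovy sketch (the names are invented; this is not the actual SCMSource implementation): the id is generated lazily, and if the generated value is never persisted, a restart produces a different id and lookups by the recorded one come up empty.

          // Toy sketch only - the real class is jenkins.scm.api.SCMSource and the real
          // lookup is SCMSourceOwner.getSCMSource(id).
          class SketchSource {
              String id                        // null when no explicit id was configured

              String getId() {
                  if (id == null) {
                      // Lazily generated in memory; unless the owning item is saved
                      // afterwards, this value never reaches config.xml.
                      id = UUID.randomUUID().toString()
                  }
                  return id
              }
          }

          def source = new SketchSource()
          def recordedSourceId = source.getId()     // the branch metadata records this id

          // After a restart the reloaded source still has no id on disk, so a fresh
          // UUID is generated and the recorded id no longer matches any current source:
          def reloaded = new SketchSource()
          def currentSources = [(reloaded.getId()): reloaded]
          assert currentSources[recordedSourceId] == null   // effectively a NullSCMSource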


          Andrew Bayer added a comment -

          Ok, I may have a fix at https://github.com/jenkinsci/scm-api-plugin/pull/49. Needs review.


          Leandro Narosky added a comment -

          Is there any workaround for this? I'm getting this error every time I try to build my project.

          Can I do something to make it work while I wait for the fix?

          Here is the fragment where it fails for me, if it helps.

          pipeline {

            agent any

            tools {
               jdk 'jdk1.8'
               maven 'mvn3.5.2'
            }

            stages {
               stage('Checkout') {
                  steps {
                     checkout scm
                  }
               }
            }
          }
          

          Thanks!


          Michael Neale added a comment -

          leoxs22 not sure of workarounds, although in the comments above abayer mentions an earlier version of the git plugin that you may be able to downgrade to (maybe...)


          Leandro Narosky added a comment -

          Well, I'll try it.

          I can confirm that it works fine when making a full multi-branch scan of the repository.


          Tristan Lewis added a comment -

          The PR which worked around the issue got closed.

          This issue is giving us a huge headache. We have a system of dozens of multibranch pipelines that are generated by a jobDSL script. Every time the jobs are re-seeded after updates to the jobDSL script, they all get into a broken state where none of them can successfully perform an SCM checkout step, manifesting with this "could not determine exact tip revision" error. I've had to create a utility job to force re-indexing of all the jobs. That gets the jobs out of the broken state, but because of how many multibranch pipeline jobs we have and how many branches each of them indexes, only about half the jobs get indexed before our git provider (Bitbucket) starts rate-limiting us due to the number of API requests made by the branch indexing operations.
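
          For anyone else stuck in the same place, a utility like that can be approximated with a script-console (or system Groovy) snippet along these lines. This is only a sketch based on my reading of the multibranch API, and it queues an indexing run - and therefore potentially builds - for every multibranch Pipeline it finds.

          import jenkins.model.Jenkins
          import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject

          // Queue "Scan Repository Now" (branch indexing) for every multibranch Pipeline.
          Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).each { project ->
              println "Scheduling branch indexing for ${project.fullName}"
              project.scheduleBuild()
          }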


          Alex Suter added a comment -

          This problem is really annoying. After a Jenkins master restart, we have to rescan all the jobs, which builds all branches! And we quite often restart Jenkins to update Jenkins itself and also the system on which Jenkins is running. The closed PR would at least solve the problem for the moment. In the meantime, can we work on a better solution?


          Ygor Almeida added a comment - - edited

          I'm in the same boat, alexsuter. My team is already talking about a possible migration to GoCD. Like you, we are always updating Jenkins and its plugins. This issue is causing a series of unexpected deployments. Anyways, total chaos in my environment right now. At this point, even a workaround is very welcome.


          Michael Neale added a comment -

          Ugh, this sounds bad. I have a server that I update every day and have not seen this, but Tyler has a similar setup and has.


          Stephen Connolly added a comment -

          So I suspect the affected users are not configuring jobs through the UI.

          When I configure the job via the UI, then the ID gets assigned correctly.

          So unless somebody can show otherwise, this smells a lot like "user error" (not necessarily that the error is the user's fault, mind).

          Case in point, I took my own Jenkins and created a fresh multibranch project and added a GitSCMSource via the UI and saved =>

          I inspected the config.xml on-disk manually and we have:

            <sources class="jenkins.branch.MultiBranchProject$BranchSourceList" plugin="branch-api@2.0.18">
              <data>
                <jenkins.branch.BranchSource>
                  <source class="jenkins.plugins.git.GitSCMSource" plugin="git@3.7.0">
                    <id>d3e70531-9f4d-4a7b-972f-339296b80997</id>
                    <remote></remote>
                    <credentialsId></credentialsId>
                    <traits>
                      <jenkins.plugins.git.traits.BranchDiscoveryTrait/>
                    </traits>
                  </source>
                  <strategy class="jenkins.branch.DefaultBranchPropertyStrategy">
                    <properties class="empty-list"/>
                  </strategy>
                </jenkins.branch.BranchSource>
              </data>
              <owner class="org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject" reference="../.."/>
            </sources>
          

          which matches exactly the id that is in memory.

          My initial suspect would be code like the job-dsl plugin. If the user was relying on an accidental side-effect that resulted in the id getting queried before the save, then that would explain things.

          As I understand, CodeValet uses some automatic configuration mechanism to define jobs... looks like that mechanism is not assigning an id before configuring the list of BranchSources.

          Now in https://github.com/jenkinsci/scm-api-plugin/blob/master/docs/consumer.adoc#scmsourceowner-contract we have

          Ensure that SCMSource.setOwner(owner) has been called before any SCMSource instance is returned from either SCMSourceOwner.getSCMSources() or SCMSourceOwner.getSCMSource(id).

          We should probably make it a more explicit part of the contract of SCMSourceOwner that all SCMSource instances added to the owner must have an id assigned before ownership is set (or we can fall back to ensuring an id has been assigned as a (minimal) side-effect of calling SCMSource.setOwner(owner)).

          That would minimize the issue for users, but probably should be a separate ticket.

          I also thought I had documented somewhere (but I cannot find where... and https://github.com/jenkinsci/branch-api-plugin/blob/master/docs/user.adoc seems considerably more anemic than I thought I had left it) that it was critical - if using stuff like JobDSL - that you must assign an id...

          Hmmm https://github.com/jenkinsci/scm-api-plugin/blob/master/docs/implementation.adoc#implementing-jenkinsscmapiscmsource has a note on IDs...

          SCMSource IDs
          The SCMSource’s IDs are used to help track the SCMSource that a SCMHead instance originated from.

          If - and only if - you are certain that you can construct a definitive ID from the configuration details of your SCMSource then implementations are encouraged to use a computed ID.

          When instantiating an SCMSource from a SCMNavigator the navigator is responsible for assigning IDs such that two observations of the same source will always have the same ID.

          In all other cases, implementations should use the default generated ID mechanism when the ID supplied to the constructor is null.

          An example of how a generated ID could be definitively constructed would be:

          Start with the definitive URL of the server including the port

          Append the name of the source

          Append a SHA-1 hash of the other configuration options (this is because users can add the same source with different configuration options)

          If users add the same source with the same configuration options twice to the same owner, with the above ID generation scheme, it should not matter as both sources would be idempotent.

          By starting with the server URL and then appending the name of the source we might be able to more quickly route events.

          The observant reader will spot the issue above, namely that we need to start from an URL that is definitive. Most SCM systems can be accessed via multiple URLs. For example, GitHub can be accessed at both https://github.com/ and https://github.com./. For internal source control systems, this can get even more complex as some users may configure using the IP address, some may configure using a hostname without a domain, some may configure using a fully qualified hostname…​ also ID generation should not require a network connection or any external I/O.

          But that was not the note I thought I wrote.
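
          As a rough illustration of the scheme described in that note (not plugin code, just a Groovy sketch of the idea; the repository and credential names are placeholders, and the URL-normalisation problem called out above is deliberately ignored):

          import java.security.MessageDigest

          // Sketch: definitive server URL + source name + SHA-1 of the remaining
          // configuration options, so the same configuration always yields the same id.
          def definitiveId(String serverUrl, String sourceName, Map configOptions) {
              def sha1 = MessageDigest.getInstance('SHA-1')
                      .digest(configOptions.sort().toString().getBytes('UTF-8'))
                      .encodeHex()
                      .toString()
              return "${serverUrl}::${sourceName}::${sha1}".toString()
          }

          // Two observations of the same source produce the same id, no I/O required:
          assert definitiveId('https://github.com:443', 'rtyler/joni', [credentialsId: 'gh-token']) ==
                 definitiveId('https://github.com:443', 'rtyler/joni', [credentialsId: 'gh-token'])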


          Stephen Connolly added a comment -

          Action Items

          • rtyler Can you provide details as to how you create the multibranch projects, specifically the code snippet where you set the sources on the multibranch project.
          • leoxs22 / tr1z / ygorth Can you confirm/deny whether you are using something like the JobDSL or another non-UI mechanism to create the affected multibranch projects. (If my theory is correct, the solution is completely in your hands: fix your scripts to assign a non-null ID. The ID can be anything, e.g. dummy, as long as it is unique within that SCMSourceOwner. The purpose of the ID is to allow determining branch take-over in the case where multiple sources have been configured in the multi-branch project, which results from the original vision that things like pull requests would be discovered by using a second source rather than having the primary source discover them through traits.)
          • abayer The correct hack fix, if we need a fix (see note), should be in branch-api: we should be able to ensure that we call getId() on all sources before the save(), which would prevent the issue on behalf of users that are unaware of the requirement to assign IDs.
            • There are two ways you could achieve this: 1. BranchSource's constructors could just call getId(). 2. MultibranchProject's save() could just iterate all the sources calling getId(). I'll let you decide whether you want to do one or both of these. With this done, you should have the minimum fix. (A rough sketch of this idea follows at the end of this comment.)
            • You may want to consider adding a reflection based pre-emptive fix for the special case where there is one and only one source without an ID.
            • Basically, in onLoad you could look at the sources and by reflection peek at id; if exactly one of those inspected ids is null then you can look at all the child branches and see if there is exactly one unknown id among all the child projects (and that id should not be the special "dead branch" id)... if that set of conditions is met then you can assign the discovered id and trigger a save()
            • The above could be a lot of work, but it would benefit users as anyone affected by this issue and in the majority case of exactly one source would get their jobs fixed automatically on restart.
            • The above would be the maximal fix, which would be nicer in that it fixes things for users.
            • NOTE: rtyler / leoxs22 / tr1z / ygorth if abayer implements this then you will get build storms every time you reconfigure the project using whatever mechanism you are using to update the job configuration.

          Assuming my theory is correct, you are overwriting the sources periodically. Since the sources you are overwriting with do not have an id assigned, we will keep assigning new ones, thus all the branches will be rebuilt as if there was a "takeover"... but the events will be picked up correctly.

          This is the reason why I did not like the fix that abayer attempted in https://github.com/jenkinsci/scm-api-plugin/pull/49 although I could not articulate it properly at the time.

          • stephenconnolly Create a ticket to update scm-api so that SCMSource.setOwner(owner) will ensure that an ID has been assigned.
          • stephenconnolly Finish the user docs for branch-api
          • abayer / stephenconnolly / michaelneale to decide whether we actually want to risk build storms by trying to ensure IDs have been assigned.
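
          A very rough sketch of what the minimum fix amounts to, in case it helps the discussion (illustrative Groovy only; the class names are invented and this is not branch-api code):

          class SketchSource {
              private String id
              String getId() {
                  if (id == null) {
                      id = UUID.randomUUID().toString()   // lazily generated, like the auto-generated UUID discussed above
                  }
                  return id
              }
          }

          class SketchMultiBranchProject {
              List<SketchSource> sources = []

              void save() {
                  // Materialise every lazily generated id so it ends up in config.xml,
                  // instead of being regenerated (differently) after the next restart.
                  sources.each { it.getId() }
                  // ... then persist the configuration to disk as usual ...
              }
          }

          new SketchMultiBranchProject(sources: [new SketchSource()]).save()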


          Stephen Connolly added a comment -

          I have created JENKINS-49610 and assigned it to abayer.


          R. Tyler Croy added a comment -

          stephenconnolly none of the Multibranch Pipelines I configure are "automated." They are however largely GitHub Organization Folders (see also).


          Alex Suter added a comment - - edited

          Hi stephenconnolly

          I always create my multibranch build pipelines the following way:

          1. Blue Ocean UI
          2. Create new pipeline
          3. Choose Git
          4. Enter a git repository url from Bitbucket
          5. Create

          So I always use the UI. Please let me know if I can give any other information. In my environment it is always reproducible. (And I always use declarative Pipelines.)


          Stephen Connolly added a comment -

          So looking at https://github.com/jenkinsci/blueocean-plugin/blob/master/blueocean-git-pipeline/src/main/java/io/jenkins/blueocean/blueocean_git_pipeline/GitPipelineCreateRequest.java

          I note that it appears BlueOcean does not provide an ID when creating jobs (and BlueOcean bypasses the classic UI screen, so should be responsible for setting an ID)

          In the case of GitHub, BlueOcean is providing an ID: https://github.com/jenkinsci/blueocean-plugin/blob/master/blueocean-github-pipeline/src/main/java/io/jenkins/blueocean/blueocean_github_pipeline/GithubPipelineCreateRequest.java

          (I cannot get line links from my phone, so I can only point at the files and I may be misreading on a small screen)

          rtyler by any chance are you creating the jobs through BlueOcean?

          @all is it only GitSCMSource that is affected?


          Stephen Connolly added a comment -

          So this is how BlueOcean creates its multibranch projects: https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-pipeline-scm-api/src/main/java/io/jenkins/blueocean/scm/api/AbstractMultiBranchCreateRequest.java#L77-L78

          In the case of GitSCMSource, no ID is assigned: https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-git-pipeline/src/main/java/io/jenkins/blueocean/blueocean_git_pipeline/GitPipelineCreateRequest.java#L35-L44

          In the case of GitHubSCMSource, we have https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-github-pipeline/src/main/java/io/jenkins/blueocean/blueocean_github_pipeline/GithubPipelineCreateRequest.java#L51-L70 which would appear not to assign an ID (notice the null on this line: https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-github-pipeline/src/main/java/io/jenkins/blueocean/blueocean_github_pipeline/GithubPipelineCreateRequest.java#L60 - that should not be null)... let's check if anything else is helping us out along the code paths: https://github.com/jenkinsci/github-branch-source-plugin/blob/d60cc7617ee9ad56fd3ea3a3c3ad2569dc07c827/src/main/java/org/jenkinsci/plugins/github_branch_source/GitHubSCMSourceBuilder.java#L120-L127 - nope, nothing there... https://github.com/jenkinsci/scm-api-plugin/blob/c63ce5d6406d48f3101f6ef3937e402e0bd0b3bf/src/main/java/jenkins/scm/api/SCMSource.java#L160-L166 - nope, nothing there either.

          Bitbucket is the same: https://github.com/jenkinsci/scm-api-plugin/blob/c63ce5d6406d48f3101f6ef3937e402e0bd0b3bf/src/main/java/jenkins/scm/api/SCMSource.java#L160-L166 and again no id assigned: https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-bitbucket-pipeline/src/main/java/io/jenkins/blueocean/blueocean_bitbucket_pipeline/BitbucketPipelineCreateRequest.java#L64

          Now this should only show up on Multibranch projects. If it is an org folder then the SCMNavigator is supposed to be assigning IDs based on a strict formula, e.g. https://github.com/jenkinsci/github-branch-source-plugin/blob/d60cc7617ee9ad56fd3ea3a3c3ad2569dc07c827/src/main/java/org/jenkinsci/plugins/github_branch_source/GitHubSCMNavigator.java#L1560-L1564 and https://github.com/jenkinsci/bitbucket-branch-source-plugin/blob/9f5551b9c05e3bb51c9046204f8871157804401b/src/main/java/com/cloudbees/jenkins/plugins/bitbucket/BitbucketSCMNavigator.java#L874-L883

          My analysis

          There are two separate issues here:

          1. The case of Multibranch Projects created by BlueOcean: in these cases BlueOcean is not assigning an ID and, as a result, until the job has been reconfigured in the classic UI (it should be sufficient to just open & save the job) the job will have the issue on every restart. IOW I claim a workaround of opening and resaving the multibranch project in the classic UI. Please demonstrate otherwise.
          2. The case of the Job DSL plugin being used incorrectly (no blame, just a statement of fact). This should be fixable by users assigning a static ID in the job definition. IOW I claim this is not a defect - at least once JENKINS-49610 has documented this as being a requirement.

          We need to identify if there are any other issues being lumped in.

          vivek BlueOcean should be assigning an ID to all SCMSource instances it creates... I think blueocean would be the perfect ID to set. Can probably just do that by changing: https://github.com/jenkinsci/blueocean-plugin/blob/efb9de930e73454ebcda7625f168a426bc04f416/blueocean-pipeline-scm-api/src/main/java/io/jenkins/blueocean/scm/api/AbstractMultiBranchCreateRequest.java#L77 from

          SCMSource source = createSource(project, scmConfig);
          

          to

          SCMSource source = createSource(project, scmConfig).withId("blueocean");
          


          Stephen Connolly added a comment -

          I think JENKINS-46290 is demonstrating the same issue as the Job DSL half of this. abayer if you need to replicate a JobDSL configuration that is leaving the SCMSource.id == null, I believe https://github.com/samrocketman/jervis/blob/b25af324cce229255fd34c9070f32da4d0d8b393/jobs/jenkins_job_multibranch_pipeline.groovy#L35-L62 is such an example. The fix to that should just be adding id 'some-value' within the github section.
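
          For example, something along these lines in the seed script (a sketch only - the exact Job DSL structure and method names depend on the job-dsl and branch-source plugin versions, and the owner/repository names here are placeholders):

          multibranchPipelineJob('example-project') {
              branchSources {
                  branchSource {
                      source {
                          github {
                              // An explicit, stable id prevents a new auto-generated UUID
                              // (and therefore broken checkouts) every time the seed job runs.
                              id('example-owner:example-project')
                              repoOwner('example-owner')
                              repository('example-project')
                          }
                      }
                  }
              }
          }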


          Alex Suter added a comment -

          I can confirm that when I save the multibranch build pipeline in the traditional Jenkins UI (changing the description), the problem no longer occurs after a restart of Jenkins.


          Stephen Connolly added a comment -

          alexsuter you shouldn't even need to change the description. Just clicking "Save" on the classic UI screen should suffice if BlueOcean created the job.

          You will need to repeat if BlueOcean updates the job config... and at that point you will get a rebuild storm (because BlueOcean is not round-tripping the ID)


          Stephen Connolly added a comment -

          vivek so thinking on this some more, my suggested simple fix for BlueOcean actually needs to be slightly more complex. There are already a significant number of existing jobs that were created by BlueOcean and either have a null id on disk (and are suffering from this issue) or have a non-null id on disk.

          If BlueOcean creates a new job, the simple fix is fine.

          If BlueOcean updates a job, it needs to round-trip the existing id if and only if the SCMSource type remains the same, otherwise it will trigger a rebuild storm on restart. To be clear, the rebuild storm on restart is an issue right now with BlueOcean even if using a version of the git plugin that ensures a non-null id in the constructor.

          So irrespective of everything else, BlueOcean needs to fix the round tripping of IDs during a configuration update... I claim BlueOcean is supposed to assign an ID during initial creation, but if you feel you have a strong argument to counter I am happy to hear it


          Michael Neale added a comment - - edited

          stephenconnolly vivek I think open/save from classic is a perfectly fine workaround for Blue Ocean users. If blue is patched so newly created pipelines are saved correctly, I still think it is OK to apply that fix/workaround of open and save for older pipelines.

          Whilst this issue was flagged originally by Tyler with Code Valet and Blue Ocean creation, it seems most comments are from jobDSL users (who I think can fix it in the scripts?). So perhaps all that is needed is for Blue Ocean to do the right thing for new pipelines (older ones can be worked around), and perhaps some warning for jobDSL users that they need to set an id?

          Also - nice sleuthing! This is a tricky one.

           


          Vivek Pandey added a comment -

          >If BlueOcean updates a job, it needs to round-trip the existing id if and only if the SCMSource type remains the same, otherwise it will trigger a rebuild storm on restart.

          BlueOcean doesn't update a job. Even using the API, if you try to create a new job for a github/bitbucket/git repo, it errors out if a job with the same name already exists. Once a user creates a BlueOcean pipeline job, the most they can do is trigger re-indexing. So a simple solution of creating a blueocean-specific id should be ok.


          Vivek Pandey added a comment -

          BlueOcean PR: https://github.com/jenkinsci/blueocean-plugin/pull/1662. It has been merged to master.


          Michael Neale added a comment -

          leoxs22 tr1z alexsuter ygorth

          Sorry for how drawn out this has been.

          For users of JobDSL the solution/workaround is to always set an id for the SCM, something like:

          source {
              github {
                  //github
                  id "owner-${project_folder}:repo-${project_name}"
              }
          }

          This will then correct the behavior. Does this work for you? (similarly for other SCMs)

          If the jobs were created in some other way, then opening the config and saving it (no change needed) will fix it, and Blue Ocean has been patched to set the id correctly.

           

          See sag47's last comment here: https://issues.jenkins-ci.org/browse/JENKINS-46290?focusedCommentId=329162&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-329162 and his fix here: https://github.com/samrocketman/jervis/commit/e4cd6324ff22c3593d7e6feab88dff79e516e14b for an example of a jobDSL using it. 


          Michael Neale added a comment -

          rtyler so far this looks like jobs created by Blue Ocean (before the fix) and jobDSL scripts without an id set get this, but as for the GitHub organisation folder - are you seeing it there? (It may have a related issue.)

          stephenconnolly do you know if GitHub organisation folders could be bitten by this, not setting the id correctly?


          R. Tyler Croy added a comment -

          I could have sworn I saw this with the GitHub Organization Folders on ci.jenkins.io as well as from my Code Valet instances, but I cannot find any record of it.

          I might just be seeing ghosts in the machine, disregard


          Michael Neale added a comment -

          OK, good to know - I might close this now given the 'id' solution and the Blue Ocean fix. There is a linked follow-on ticket to make the API (specifically for jobDSL) clearer here. Feel free to reopen if there is new information.

          rtyler ok - well that fits with the theory, so that is good. 

