
Offer "Build with Parameters" on first build when declarative Jenkinsfile found

      By default a branch project will automatically run the first build, with no parameters, so params will just pick up any default values. You have the option to suppress the automatic first build, but this does not give you any way to enter parameters for it (at least in the UI; perhaps possible via CLI/REST), since Jenkins does not know what the parameters are going to be until it starts running. But in the case of Declarative we could in principle inspect the Jenkinsfile when the branch project is created (via SCMFileSystem) and determine the parameter definitions by static parsing without actually running.

      More generally, if Declarative is in use and there are properties, we could set all the project properties when the branch project is created, even if the first build is run automatically. (Though I would suggest that the automatic first build should be automatically suppressed if there is a ParametersDefinitionProperty.)
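For reference, a rough sketch of the proposed inspection (hedged: SCMFileSystem and SCMFile are from the scm-api plugin; parseForParameters is a hypothetical stand-in for a Declarative static parser):

import jenkins.scm.api.SCMFileSystem

// Sketch: read the branch's Jenkinsfile without running a build.
SCMFileSystem fs = SCMFileSystem.of(source, head) // source/head from the branch project
if (fs != null) {
    String jenkinsfile = fs.getRoot().child('Jenkinsfile').contentAsString()
    // Hypothetical: statically parse the Declarative model for parameter
    // definitions, then apply a ParametersDefinitionProperty to the job.
    def parameterDefinitions = parseForParameters(jenkinsfile)
}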

          [JENKINS-41929] Offer "Build with Parameters" on first build when declarative Jenkinsfile found

          Michael Neale added a comment -

ssbarnea glad you brought it up - this is really annoying, yes.

I think declarative has the best chance to solve this... maybe. Would like to talk to abayer about this.

Keep thinking up any other ideas...


          Andrew Bayer added a comment -

          So I've thought on this a fair amount and haven't been comfortable enough with any of the Declarative possibilities to pursue one yet...


          Murad Korejo added a comment -

How about a special "Refresh Parameters" option for pipeline projects? It won't run the full job but "processes" the pipeline script by downloading the latest from source and updating the job config so that new/changed params are reflected.


          Michael Neale added a comment -

mkorejo the problem with scripted pipeline is that there is no way to evaluate the code to set the properties without running the whole thing, as it is programmatic. You would need a way to identify specific calls that set parameters and hope they are not based on dynamic variables, etc.


          Sorin Sbarnea added a comment -

The only dynamic part I have seen and used so far for parameters is setting the default value based on an environment variable. This is a common practice: it allows a user to specify an override value on a specific run while controlling the default value at the master level.

When you have 1000+ jobs, you don't want to update all of them because you decided to change the default value of one variable.

Other than this, I don't think they are dynamic, and TBH it would not make sense, because you reach the chicken-and-egg problem: which comes first, the first job execution or the parameter processing?
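For illustration, a sketch of the environment-variable-default pattern described above, in a scripted properties step (the parameter and variable names here are made up):

properties([
    parameters([
        // Default is controlled at the master level via an environment
        // variable; a user can still override the value on a specific run.
        string(name: 'DEPLOY_TARGET',
               defaultValue: env.DEPLOY_TARGET_DEFAULT ?: 'staging',
               description: 'Deployment target (override per run)')
    ])
])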


          Ivan von Nagy added a comment -

Any idea when this might be picked up? It is a major hindrance to our using pipelines. All PRs will fail an automated test, since the PR results in a new branch and the pipeline then fails because the parameters are not set.

BTW, anyone have a good workaround for now? For example, check for null parameters in the pipeline and trigger a complete restart of the pipeline so that the parameters are picked up.


          Michael Neale added a comment -

vonnagy_loggly I still have no clue how to implement this, short of changing a fair bit about how pipeline works, given that the parameters are defined as a script.

Would be interested in talking through other ideas; perhaps the params could be set up as part of a creation wizard/GUI (before the pipeline exists), but that won't work for existing Jenkinsfiles...


          Carlos Tadeu Panato added a comment - - edited

What I did as a workaround: we have a job that triggers our multibranch project. In this trigger we start the job, and after that we check whether that run is the first one; if it is, we kill it and retrigger it with the same parameters.

I did this using a post-build Groovy script.


          Ivan von Nagy added a comment -

          For better or worse, I added this logic to check for a param at the beginning and fail fast while kicking off a new build on the branch.


          // this stage checks for null parameters which usually occur when a new branch is discovered. See the following
          // for more details: https://issues.jenkins-ci.org/browse/JENKINS-41929
          stage('Validate parameters') {
            when {
              expression {
                // Only run this stage if the BUILD_IMAGE is invalid
                return !(env.BUILD_IMAGE)
              }
            }
            steps {
              withCredentials([string(credentialsId:'jenkins-build', variable:'TOKEN')]) {
                  sh '''
                      set +x
                      RETRY_BRANCH=$(python -c 'import urllib, sys; print urllib.quote(sys.argv[1], sys.argv[2])' "${BRANCH_NAME}" "")
                      curl --user "service@foo.com:$TOKEN" -X POST -H "Content-Type: application/json" "http://localhost:8080/blue/rest/organizations/jenkins/pipelines/My%20Pipeline(s)/branches/${RETRY_BRANCH}/runs/${BUILD_ID}/replay"
                  '''
              }
          
              // Abort the build, skipping subsequent stages
              error("Aborting build since parameters are invalid")
            }
          }
          


Dmitry Bondarenko added a comment -

In Jenkins 2.107.2 (April 2018) there are some minor problems with parameters.
In particular, parameters are sometimes not recognized from pipeline definitions: I need to start the build a couple of times as-is before the parameters are recognized.
Also, when I added a new "choice" parameter to existing parameters it was not recognized at all, so I had to add it in the UI (separately from the pipeline definition). And the pipeline definition was not synced from the UI parameters, so such a parameter was defined in the UI for the pipeline but was not added to the pipeline script. Thus, the "pipeline script" and the "pipeline parameters" in the UI editor can be very different.

Another bad thing: there is no way to specify captions for choice values!
For example, I would like to use an INI-file format for this, like choice: ['R2010B=Release 2010.2\nR2015=Release 9.0\nR2016=Release 9.1\nR2017=Release 2017.1'] - such a format would be much more useful for the end user. When calling a build script it would use the choice value "R2017", and the "Release 2017.1" text would only be shown in the UI when the user chooses a parameter value for a build.


          Kiruahxh added a comment -

mkorejo wrote:

How about a special "Refresh Parameters" option for pipeline projects? It won't run the full job but "processes" the pipeline script by downloading the latest from source and updating the job config so that new/changed params are reflected.

I agree, a button or an option "Refresh parameters" would be OK for scripted pipelines.
The job could have a refreshParameter input variable, and that run's status should be discarded.

Another possibility would be to split the Jenkinsfile in two: one file for the job properties and parameters, and another for the execution script, e.g. Jenkinsproperties and Jenkinsfile.

          As a workaround, I put a "refresh parameter" option in my jobs.


          John Morton added a comment -

I would highly suggest implementing this feature. It makes using parameterized Declarative Pipelines a hassle for us and forces us into non-ideal workarounds.

There should just be a button for all pipeline jobs that says "re-pull from SCM". This should pull the Jenkinsfile from SCM and change the job from a default unparameterized build to a parameterized one, but not actually run a "build".


          Daniel Moore added a comment - - edited

I am not familiar with the Jenkinsfile API, but as a workaround for declarative pipelines, would it work to create a decorator param-pipeline method and use it instead of the pipeline method? It could take the same arguments/script blocks/whatever as pipeline, look at the defined parameters, and try to update the job accordingly. If the job already had the right parameter setup, it would just pass its arguments to the pipeline method. You would still have to run it twice, so not ideal.


          Kiruahxh added a comment -

Also, when using the "properties" step, either to declare parameters or to set other options, the job options are still editable.
It is flexible but very confusing: my colleagues could edit the job's parameters without knowing that they are taken from the Jenkinsfile, then run the job and lose their work.


Tom Ghyselinck added a comment -

Hi all,

More generally, the parameters of the "previous" run are used.

We have seen this when playing around with some kind of "dynamic value" for a parameter.
We have a single (declarative) Pipeline which has a parameter to enable a DEBUG build.
During the continuous CI builds, DEBUG mode is enabled, i.e. the `BUILD_DEBUG` parameter is set to true.
On a nightly build, we set the `BUILD_DEBUG` parameter to false.

          pipeline {
          ...
              parameters {
                  booleanParam(
                      name: 'BUILD_DEBUG',
                      defaultValue: need_debug_build(currentBuild)
                  )
              }
          ...
          }
          

          We now see that:

• the "nightly build" correctly sets the parameter to false
• and the "daily builds" correctly set it to true,
• but the parameter value is only applied in the next build!

For example:

• The "nightly build" still uses `BUILD_DEBUG=true`
• While the first "daily build" uses `BUILD_DEBUG=false`

The same applies when creating a new branch
(FYI: we use the Multibranch Pipeline plugin on Subversion repositories):
the first build has no "previous build" and thus "no parameters",
which exactly explains what is seen on the first build of a (declarative) pipeline.

          In my opinion the `parameters` must be defined and applied on the current build.

I hope this information is of use to you!

          We will look for a workaround for now, but it would be great to see this fixed.
          Thank you in advance for the effort!

          With best regards,
          Tom.


          Jesse Glick added a comment -

          the pipeline will then fail as the parameters will not be set

          No, you just need to use ${params.NAME} rather than ${NAME}. The latter loads a variable defined when the build started. The former will pick up current parameter definitions at the time the expression is used, which may be after properties has (re-)defined the job’s parameters list. For builds triggered by a PR event, this is fine, as there are no parameters coming from the environment—you are getting defaults.
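For illustration, a minimal sketch of the difference (the parameter name is made up):

pipeline {
    agent any
    parameters {
        string(name: 'GREETING', defaultValue: 'hello', description: 'Example')
    }
    stages {
        stage('Demo') {
            steps {
                // Resolved against the job's current parameter definitions,
                // so the default is visible even on build #1 of a new branch.
                echo "via params: ${params.GREETING}"
                // Resolved from the environment captured when the build
                // started; on build #1 of a new branch it may not exist yet.
                sh 'echo "via env: ${GREETING:-<unset>}"'
            }
        }
    }
}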


          Adity Sreekumar added a comment - - edited

When was the ${params.Name} syntax introduced? It does not seem to be detected when I use it in the following context:

parameters {
    string(
        defaultValue: '1.0',
        description: 'Toolchain version',
        name: 'TOOLCHAIN')
}
steps {
    checkout([$class: 'GitSCM', branches: [[name: '${params.TOOLCHAIN}']]])
}

           If I use just TOOLCHAIN, it gives me the error reported earlier where it cannot find the environment variable.


          Stefan Verhoeff added a comment - - edited

asreekumar I think the issue in your code is the single quotes. Use double quotes instead to make variable interpolation work.

[[name: "${params.TOOLCHAIN}"]]
          


Adity Sreekumar added a comment -

That worked, thanks.


          Igor Pashev added a comment - - edited

          There is a similar issue with triggers.

          I do not know how this works, but with declarative syntax the fix could be as simple as parsing the pipeline definition and picking up triggers and parameters.

After all, this works with good old XML configs, which are perfectly declarative too.


          Igor Pashev added a comment -

At least I'd like it fixed for inline pipelines (not in SCM repositories). Pipelines offer some useful features like parallel steps, multiple SCMs, etc. which are not available in old XML configs. This problem with triggers and parameters is definitely a regression.


          Jesse Glick added a comment -

I'd like it fixed for inline pipelines (not in SCM repositories).

          Just define the trigger, parameters, or other job properties directly on the job, as you would have for freestyle. This works for both inline scripts and scripts from SCM. You only need to use the properties step and thus encounter this issue when you are using multibranch projects (a.k.a. “Pipeline-as-Code”).
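For contrast, a minimal sketch of the multibranch pattern that does hit this issue: a scripted pipeline (re)defining its own parameters via the properties step (the parameter name is made up):

properties([
    parameters([
        string(name: 'GREETING', defaultValue: 'hello', description: 'Example')
    ])
])

node {
    // The job's parameters are (re)defined by the build itself, so build #1
    // of a new branch starts before they exist.
    echo "param: ${params.GREETING}"
}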


          Igor Pashev added a comment -

          > Just define the trigger, parameters, or other job properties directly on the job, as you would have for freestyle. This works for both inline scripts and scripts from SCM

          Looks like it really works!


          Patrick Ruhkopf added a comment - - edited

There seems to be a more critical issue when using parameterized, declarative pipelines with shared libraries and multibranch projects. When there are modifications and a change is pushed from SCM, the build fails immediately with the following exception:

          java.lang.IllegalArgumentException: Null value not allowed as an environment variable: APPLICATION
           at hudson.EnvVars.put(EnvVars.java:359)
           at hudson.model.StringParameterValue.buildEnvironment(StringParameterValue.java:59)
           at hudson.model.ParametersAction.buildEnvironment(ParametersAction.java:145)
           at hudson.model.Run.getEnvironment(Run.java:2365)
           at org.jenkinsci.plugins.workflow.job.WorkflowRun.getEnvironment(WorkflowRun.java:513)
           at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:106)
           at org.jenkinsci.plugins.workflow.multibranch.SCMBinder.create(SCMBinder.java:120)
           at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:303) 

          When the pipeline is in this state, the only fix is to manually trigger it with "Run with parameters". I can't use any of the workarounds suggested here, because even when I update the Jenkinsfile to not call the shared pipeline and just print a single "echo hello world", it still fails right away. Any suggestions?

          Does this belong here in this issue or should I open a new one?


          Andrew Bayer added a comment -

          ruhkopf - open a new ticket and make sure to include your full reproduction case. I assume you don't have a default value set for the parameter in question?


          Falko Modler added a comment -

          I am also affected by this:

          • Jenkinsfile from SCM
          • string parameter with defaultValue
          • value is accessed via ${params.[...]}
          • changed defaultValue is only picked up in the second build, not right away


          Ivan Fernandez Calvo added a comment - - edited

The workaround does not seem valid on the latest version of Declarative Pipeline: even if you use params to declare the environment variable, the variable is not defined.

          • Jenkins core 2.153
          • Declarative Pipeline 1.3.3

          Steps to replicate the issue:

          • Create a Jenkinsfile like the following in a repo
          • Create a multibranch pipeline that uses this repo and this Jenkinsfile
          • Create a PR
• Check the pipeline logs and you will see the issue: the variable is not defined in parallel stages
          #!/usr/bin/env groovy
          
          pipeline {
            agent none
            options {
              timeout(time: 1, unit: 'HOURS')
              buildDiscarder(logRotator(numToKeepStr: '20', artifactNumToKeepStr: '20', daysToKeepStr: '30'))
              timestamps()
              ansiColor('xterm')
              disableResume()
              durabilityHint('PERFORMANCE_OPTIMIZED')
            }
            parameters {
              string(name: 'GO_VERSION', defaultValue: "1.10.3", description: "Go version to use.")
            }
  stages {
    stage('Initializing'){
      agent { label 'linux && immutable' }
      options { skipDefaultCheckout() }
      environment {
        GO_VERSION = "${params.GO_VERSION}"
      }
      stages {
        stage('It works') {
          steps {
            sh "echo '${GO_VERSION}'"
          }
        }
        stage('Test') {
          failFast true
          parallel {
            stage('Fail 01') {
              steps {
                sh "echo '${GO_VERSION}'"
              }
            }
            stage('Fail 02') {
              steps {
                sh "echo '${GO_VERSION}'"
              }
            }
          }
        }
      }
    }
  }
}
          

          This works

          pipeline {
            agent none
            options {
              timeout(time: 1, unit: 'HOURS')
              buildDiscarder(logRotator(numToKeepStr: '20', artifactNumToKeepStr: '20', daysToKeepStr: '30'))
              timestamps()
              ansiColor('xterm')
              disableResume()
              durabilityHint('PERFORMANCE_OPTIMIZED')
            }
            parameters {
              string(name: 'GO_VERSION', defaultValue: "1.10.3", description: "Go version to use.")
            }
  stages {
    stage('Initializing'){
      agent { label 'linux && immutable' }
      options { skipDefaultCheckout() }
      environment {
        GO_VERSION = "${params.GO_VERSION}"
      }
      stages {
        stage('It works') {
          steps {
            sh "echo '${GO_VERSION}'"
          }
        }
        stage('Test') {
          failFast true
          parallel {
            stage('Fail 01') {
              environment {
                GO_VERSION = "${params.GO_VERSION}"
              }
              steps {
                sh "echo '${GO_VERSION}'"
              }
            }
            stage('Fail 02') {
              environment {
                GO_VERSION = "${params.GO_VERSION}"
              }
              steps {
                sh "echo '${GO_VERSION}'"
              }
            }
          }
        }
      }
    }
  }
}
          


steve bussetti added a comment -

Has there been any traction on this? Honestly, if someone could point me at the code responsible for reading the Jenkinsfile from the SCM, I could write a simple plugin that just adds a "Refresh Job Definition" button to the sidebar of a job.

Most of the folks I see run into this are less concerned with the first-time triggered build than with being unable to pick up new parameters after updating a particular job, which means they're manually executing a Pipeline.


          Hideaki Kawashima added a comment - - edited

Is this bug known to the person in charge?
Are you already working on countermeasures?

The ability to specify a default value in the parameters block is broken. Various provisional workarounds are listed here, but they fail immediately.
As mentioned above, re-definition in the environment block is not possible in parallel execution. Script compilation fails if the number of parameters defined in the parameters block is large or the script is slightly larger. We expect a first build with the default settings defined in the parameters block for post-commit processing, but we have to start the build manually after it fails. (This breaks automated execution of the build pipeline.)

As you know, there are many different builds in a CI environment for post-commit processing. Rerunning them one by one manually largely defeats the purpose of a CI environment that is designed to reduce time and effort.

I'd like to see clear countermeasures or workarounds soon.
Thank you.


Marcello Romani added a comment -

[...] adds a "Refresh Job Definition" button to the sidebar of a job.

          That doesn't sound like a bad idea...


Michael Brunner added a comment -

Here is my workaround:

I've added the pipeline option

          skipDefaultCheckout true
          

and in the first stage the following script:

script {
    // On the first build, just load the parameters; they are not available
    // on the first run of new branches.
    // getBuildCauses() returns a (possibly empty) list, so check for a user
    // cause rather than comparing against null.
    if (env.BUILD_NUMBER.equals("1") && !currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').isEmpty()) {
        currentBuild.displayName = 'Parameter loading'
        addBuildDescription('Please restart pipeline')
        currentBuild.result = 'ABORTED'
        error('Stopping initial manually triggered build as we only want to get the parameters')
    }
}
          

So the first press of the button does not lead to a real build but to parameter loading.
Perhaps I can find a solution where it is triggered automatically on branch creation by the SCM. Any ideas?


          Allan Lewis added a comment -

          Thanks, brunni - I've considered doing something similar, except maybe making it SCM-triggered since my pipeline is designed to only be manually triggered.


          Paul Frischknecht added a comment - - edited

I would like to bring this issue to your attention again. We have many pseudo-failing builds on our Jenkins (we are hundreds of developers...) because of this, which renders the "Weather Icon" build-state monitoring tool useless. Also, DevOps are confused when they first see "Build now" where they expect "Build with Parameters", which they see only after that first build has failed...

          abayer, jglick


Jonathan Piché added a comment -

I believe this issue extends into a more serious issue, and I haven't found a pattern that works with Jenkins yet... This issue applies to "Multibranch" pipeline jobs, but also to simple "Pipeline" jobs.

          Imagine a simple publish job that is used by many different repositories. This job is called from other jobs. In order to provide the most reproducible solution, the other job must provide the commit id of the generic jenkinsfile to use (e.g.: $COMMIT = v2.4). This is then used in the "Clone" properties of the job config (through the UI) as "Branch to checkout: $COMMIT".

This works well: Jenkins receives $COMMIT, and uses that to check out "a Jenkinsfile from the past" and process it.

This works well, as long as you don't add/remove properties.

If I release a new v2.5 with a new property, I cannot support the v2.4 properties, because when other jobs call mine with v2.4 or earlier, the new property will be deleted. This affects both the "Build with Parameters" method AND `build job: job_name` from another Jenkinsfile.

          I can think of 2 workarounds:

          1. Create a new job with a new name each time the properties change
          2. Leverage the multibranch pipeline so that it discovers a special branch prefix like `jenkins/` that you use to publish new versions of the jenkinsfile


Neither of my workarounds fixes the "first time run" problem, but the real problem here is that you can't call builds 100% reproducible if Jenkins has to rely on some artifact from the previous build in order to work. This is probably the root of all evil...?

          Is there a more general issue that I should subscribe to in order to monitor the progress on the chicken & egg problem?


          Jesse Glick added a comment -

There is no more general issue; it is just inherent to using a properties step as part of the build itself, as opposed to some sort of external job (re)definition such as the job-dsl plugin.
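For reference, a sketch of that external-definition alternative with the job-dsl plugin (the job name and repository URL are made up); the parameters live outside the Jenkinsfile, so they exist before the first build ever runs:

pipelineJob('example/publish') {
    parameters {
        // Defined externally, so "Build with Parameters" works from build #1.
        stringParam('TOOLCHAIN', '1.0', 'Toolchain version')
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://example.com/repo.git') }
                    branch('*/master')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}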


          Felipe Santos added a comment - - edited

          I'm trying to apply this workaround but it does not work:

          Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods putAt java.lang.Object java.lang.String java.lang.Object

          Any other simple workaround?

          Felipe Santos added a comment - - edited

          I ended up with:

          stage('Preparations') {
            steps {
              echo 'Initialize parameters as environment variables due to https://issues.jenkins-ci.org/browse/JENKINS-41929'
              evaluate """${def script = ""; params.each { k, v -> script += "env.${k} = '''${v}'''\n" }; return script}"""
            }
          }
          


          Ivan Fernandez Calvo added a comment - - edited

We have another workaround for builds that do not have default values for all parameters but need values: we mark the build as NOT_BUILT at the when level and skip everything. To do that, we check that the environment variable and the parameter have the same value, and we also check that the mandatory parameters have values.

          stage('Prepare') {
            when {
              expression {
                def ret = (
                  "${env.PARAM01}" == "${params.PARAM01}"
                  && "${params.PARAM01}" != ""
                  && "${params.PARAM02}" != ""
                  && "${params.PARAM03}" != ""
                  )
                if(!ret){
                  currentBuild.result = 'NOT_BUILT'
                  currentBuild.description = "The build has been skipped"
                  currentBuild.displayName = "#${BUILD_NUMBER}-(Skipped)"
                  echo("This build does not have valid parameters.")
                }
                return ret
              }
            }
            stages {
              stage('checkout'){
                ....
              }
              stage('lint'){
                ....
              }
              stage('build'){
                ....
              }
              stage('test'){
                ....
              }
            }
          }
          


          Felipe Santos added a comment - - edited

          I think that's not similar... you don't fix the environment variables, you skip the build if they're not fixed.

          The workaround I proposed actually fixes the environment variables to point to the defaultValue of the parameters, so you don't need to skip the build.


          Tim Black added a comment - - edited

We have implemented a workaround to the core problem, similar to brunni's above, and it's been working well for some months, but now we're interested in making this a bit more seamless, and less confusing, for our users. (We can't expect all software developers using a Jenkins-based build system to know all of Jenkins' quirks, and this, IMO, is one of Jenkins' worst dirty little secrets.)

          So far, what we've done is add this first stage to our pipeline:

stage("Loading Parameters on First Build") {
    when { expression { env.BUILD_NUMBER == '1' } }
    // Because of this: https://issues.jenkins-ci.org/browse/JENKINS-41929
    steps {
        script {
            // getBuildCauses() returns a (possibly empty) list, so check for
            // a user cause rather than comparing against null.
            if (!currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').isEmpty()) {
                currentBuild.displayName = 'Parameter Initialization'
                currentBuild.description = 'On the first build we just load the parameters, as they are not available on the first run of new branches. A second run has been triggered automatically.'
                currentBuild.result = 'ABORTED'

                // Re-run this job (correct any /'s) but don't wait...
                build job: env.BRANCH_NAME.replace("/", "%2F"), wait: false

                error('Stopping initial manually-triggered build as we only want to get the parameters')
            }
        }
    }
}
          

          Automatically triggering build #2 helps, but this approach has 2 problems:

          1. it's still kind of confusing to users what happened in build #1 and why it was necessary
          2. we use Jenkins Build # in our semantic versioning scheme, so there can never be a Version a.b.c.1. Versions always start with a.b.c.2.

From my perspective, for jobs that have params defined in the Pipeline, build #1 can always be thrown away. So my new workaround proposal, which may or may not apply to any real under-the-hood solution in Jenkins, is to modify the above stage to delete build #1 (from disk) and set the next build number to 1 before triggering itself again.

I haven't implemented/tested this myself yet, but wanted to ask here: can this be done safely? I'm not using the Next Build Number plugin, and don't plan to, because of complexity, and because to my knowledge it can only handle monotonically increasing build numbers. I'm thinking something along these lines in a script {} block:

def job = Jenkins.instance.getItem(jobName)
job.getBuild(1).delete()  // how dangerous is this? Would it be better/worse to just delete jobs/jobName/builds/1 on disk?? Why?
job.nextBuildNumber = 1
          

          Thanks for your time..


          Felipe Santos added a comment -

timblaktu, doesn't my suggestion solve your problem?


          Tim Black added a comment -

          felipecassiors I like your slick 2-liner solution. I'm just not yet convinced that I'm ready to switch paradigms from using params to using environment variables in my pipelines.


          Tom added a comment -

In terms of fixing this, how about an opt-in solution for well-behaved Jenkinsfiles (I guess enabled as a behaviour for multibranch builds or similar)?

          If set, it would update the build options and parameters:

          1. when a job (e.g. branch/PR) is first created
          2. when triggered manually using a 'Refresh pipeline from source' option or similar
3. In an ideal world: whenever the pipeline file changes (i.e. by seeing if the Jenkinsfile has changed after a commit/poll and processing it before triggering the build) - this would fix the annoying off-by-one problem where you get part of your pipeline state from the previous build.

To run this, the server would evaluate the Jenkinsfile, but not execute the node or stage parts (or even execute the DSL for them - the closures could be ignored); a rough sketch follows below. You'd just need a custom Groovy execution context to run the script in, one that handled just pipeline, parameters, options and top-level properties. This should be possible for both declarative and scripted pipelines (for a scripted pipeline, you'd need a properties block at the top level, since node blocks would not be executed). Is there any security or other theoretical issue with executing the Jenkinsfile in this way, or is it very hard to implement?

          If it only worked with declarative pipelines, that would probably be fine for most people (including me).

          I think a 90% solution with some restrictions would be great, and a lot better than no solution!

           
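          For illustration, a rough sketch of that restricted evaluation for a scripted pipeline with a top-level properties block; all names here (ParamHarvester, harvested) are hypothetical, not an existing Jenkins API:

          import org.codehaus.groovy.control.CompilerConfiguration

          // Hypothetical harvester: records properties()/parameters() calls and
          // ignores node/stage bodies, so nothing is actually built.
          abstract class ParamHarvester extends Script {
              List harvested = []
              void properties(List props) { harvested.addAll(props) }   // record job properties
              Map parameters(List defs) { [parameters: defs] }          // as nested inside properties()
              void node(Object label = null, Closure body = null) {}    // never execute the body
              def methodMissing(String name, args) { null }             // swallow every other step
          }

          def cc = new CompilerConfiguration(scriptBaseClass: ParamHarvester.name)
          def script = new GroovyShell(this.class.classLoader, new Binding(), cc)
                  .parse(new File('Jenkinsfile').text)
          script.run()
          println script.harvested  // the captured parameter/property definitions

          A declarative Jenkinsfile would instead need the Declarative model parser, since everything sits inside the pipeline block, which is roughly what this issue proposes.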

          marlene cote added a comment -

          Thank you Felipe! It works, with the one caveat that the first build will always use the default choice. Less confusing for our developers, but it would be great to have a "real" solution.

          Brandon Squizzato added a comment - - edited

          I'll also share the solution I came up with to deal with this.

          All of our Jenkinsfile pipeline jobs have a parameter defined in them called 'RELOAD'. This allows a user to check the box for 'RELOAD' and run the build. By default, its value is false.

          p = []
          // Add a param option for simply reloading this job
          p.add(
              [
                  $class: 'BooleanParameterDefinition',
                  name: "RELOAD", defaultValue: false, description: "Reload the job's config and quit"
              ]
          )
          
          // add other job parameters to 'p'...
          
          properties([parameters(p)])
          

           
          The Jenkinsfile pipeline jobs then later call a function that checks for the presence of this RELOAD parameter. This function should be called after your Jenkinsfile has defined job options/parameters/etc., but BEFORE the pipeline actually "builds" or "tests" anything. That way, if RELOAD is true, the build updates the job's configuration but does not actually run the job logic; it just aborts.

          def checkForReload() {
              // Exit the job if the "reload" box was checked
              if (params.RELOAD) {
                  echo "Job is configured to reload pipeline script and exit. Aborting."
                  currentBuild.description = "reload"
                  currentBuild.result = "ABORTED"
                  error("Job is configured to reload pipeline script and exit. Aborting.")
              }
          }
          

          In the JobDSL script that the seed job loads, we define the RELOAD parameter and set it to true by default. We then call 'queue' to cause the build to run one time when the seed job loads this config.

          pipelineJob("some-job") {
              // sets RELOAD=true for when the job is 'queued' below
              parameters {
                  booleanParam('RELOAD', true)
              }
          
              // other job config here
          
              // queue the job to run so it re-downloads its Jenkinsfile
              queue("some-job")
          }
          

          The end result is that the seed job will parse the JobDSL and run the build with RELOAD=true, which causes the Jenkinsfile to be downloaded/executed (so all of the properties/configs from the Jenkinsfile are loaded), but then the 'checkForReload' function causes the job to abort before it actually runs the build/test. Since the Jenkinsfile also defines the RELOAD parameter with a default of false, subsequent runs of this build will NOT have 'RELOAD' checked unless a user manually checks the box.

          boris ivan added a comment -

          bsquizz that's the same thing we did too, though our parameter is "loadParamsAndAbort"

          menna khaled added a comment -

          timblaktu I am trying your solution but I am facing an issue: env.BRANCH_NAME is null, according to this: https://www.tikalk.com/posts/2017/05/21/how-to-evaluate-git-branch-name-in-a-jenkins-pipeline-using-gitscm/#:~:text=One%20of%20Jenkins%20environment%20variable%20is%20called%20BRANCH_NAME,you%20print%20it%2C%20the%20returned%20value%20is%20%E2%80%98null%E2%80%99. that behavior sometimes happens in pipeline jobs.
          I understand you just want to rerun the job, so I used env.JOB_NAME instead, and the job reruns successfully, but I am again not prompted to enter parameters (we get the 'Build Now' behavior rather than 'Build with Parameters').

          Could you please help?

          Tim Black added a comment -

          menna_khaled if your pipeline does not have `BRANCH_NAME` available, that feels like a problem with your Jenkins/plugin installation/versioning, and I cannot help you with that. I will say however that `BRANCH_NAME` in my example is only used to construct the name of the job to (re-)build using [the build step](https://www.jenkins.io/doc/pipeline/steps/pipeline-build-step/#build-build-a-job), so YMMV.

          balee balee added a comment -

          It might sound a bit unorthodox, but really, isn't there a way to solve this properly (from the point of view of Jenkins users), without messy and inconvenient workarounds?

          For example, when a pipeline is saved (on the UI or with CLI or ...), it might be parsed automagically, looking for parameters only, and those applied the same way as if the job had run? Or actually run the job on save, but somehow limited to parameter properties only (and hide the run from the history)? Or...

          I'm sure it is not very simple to implement but it is quite a Major issue...

          Jesse Glick added a comment -

          it might be parsed automagically and look for parameters only

          That is exactly what this issue proposes (for Declarative Pipeline).

          Miguel Costa added a comment -

          I guess there is no real chance this gets implemented, since it's been more than 3 years since it was opened?
          Is there any good way around this without failing the first build, but also without having to run it manually?

          Quentin Nerden added a comment -

          mcosta: From felipecassiors's comment above (slightly simplified), we add this to pipelines:

           

          stage('prepare') {
            steps {
              // Initialize params as envvars, workaround for bug https://issues.jenkins-ci.org/browse/JENKINS-41929
              script { params.each { k, v -> env[k] = v } }
            }
          }

           

          Felipe Santos added a comment -

          qnerden slightly? This is much cleaner and better! Thanks a lot for sharing.

          One important thing to note is that none of the parameters should be included in the `environment` section of the pipeline, otherwise changes made during a stage (like this 'prepare' one) won't be retained for the later stages (see the sketch below).
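          For illustration, a minimal declarative example of that caveat; the stage and parameter names here are made up:

          pipeline {
              agent any
              parameters { string(name: 'FOO', defaultValue: 'default') }
              // Per the note above, do NOT also declare FOO in an environment {}
              // block: that would reapply the old value at each stage and clobber
              // what 'prepare' writes into env.
              stages {
                  stage('prepare') {
                      steps { script { params.each { k, v -> env[k] = "${v}" } } }
                  }
                  stage('use') {
                      steps { echo "FOO is ${env.FOO}" }  // sees the value set in 'prepare'
                  }
              }
          }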

          orasre added a comment - - edited

          Hi felipecassiors,

          I am having the same issue, but the above workaround didn't work for me. I have two parameters; one of them is a choice that is populated by a list variable, the other is an empty text field to be populated later. None of them can have a default value.

           

          Thanks

          Felipe Santos added a comment -

          Try with this (the "${v}" presumably forces each value to a string, which is what env expects, so it should also cover boolean and other non-string parameters):

          stage('prepare') {
            steps {
              // Initialize params as envvars, workaround for bug https://issues.jenkins-ci.org/browse/JENKINS-41929
              script { params.each { k, v -> env[k] = "${v}" } }
            }
          }
          

          orasre added a comment -

          Thanks felipecassiors for your quick reply. It didn't work either. In my case, I am using the seed job to update the Jenkins pipeline jobs. Once I run the seed job, the "Build with Parameters" option disappears from the updated pipeline and I am then left with just "Build".

          It is OK for most of my jobs to just start and stop a job manually to bring "Build with Parameters" back, but the problem is with the scheduled ones. They won't start unless I manually start/stop them first.

          menna khaled added a comment -

          hello jensre, what worked for me is: I get the job via the API and read its 'nextBuildNumber' field; then, when I upload the Jenkinsfile of a job, I add to the Jenkinsfile a check that aborts the build if the build number equals <nextBuildNumber>. This way the button is 'Build with Parameters' for all the upcoming builds.
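          For illustration, a minimal sketch of that check; NEXT_BUILD is a hypothetical placeholder the upload script would substitute, e.g. from <job-url>/api/json?tree=nextBuildNumber:

          // Hypothetical: 42 stands in for the nextBuildNumber fetched before uploading.
          def NEXT_BUILD = 42
          if (env.BUILD_NUMBER.toInteger() == NEXT_BUILD) {
              currentBuild.result = 'ABORTED'
              error('Parameter-reload build only; re-run via "Build with Parameters".')
          }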

          Antoine added a comment -

          Hi jensre, in your case this is easier.

          In your seed job, include in your pipeline job definition the parameters:

          pipelineJob('YourJob') {
              parameters {
                  stringParam('First param')
                  stringParam('Second param')
              }
          }

          Then the "Build with Parameters" will be available after the seed job creates/updates your pipeline jobs.

          Diptiman added a comment - - edited

          Hi,

          I am using a workaround to get rid of this issue, and thought of sharing it as it might help some folks who are dealing with this issue. Let's say I need to pass param1 and param2 to the declarative pipeline that is used to generate the actual Jenkins pipeline. We can use temporary vars to store the actual values of the parameters we want to pass for the pipelines to run, and override the pipeline's params just before the pipeline starts.

          The snippet below shows how to store the pipeline parameters in temp vars.

           

          public interface Params {
            String param1 = 'value1'
            String param2 = 'value2'
          } 
          
          pipeline {
          .
          .
          .
            parameters {
              string(
                      description: 'param1',
                      name: 'PARAM1',
                      trim: true,
                      defaultValue: Params.param1
              )
              string(
                      description: 'param2',
                      name: 'PARAM2',
                      trim: true,
                      defaultValue: Params.param2
              )
            }
            // ... stages etc. elided ...
          }

           

          Then we can override the params just before the actual build starts:

           

          stage('Override Build params from SCM config') {
            steps {
              echo 'Initialize parameters as environment variables due to https://issues.jenkins-ci.org/browse/JENKINS-41929'
              echo "Debug: The value of param1 before ${params.param1}"
              script {
                env.param1 = Params.param1
                env.param2 = Params.param2
                echo "Debug: The value of param1 after ${param1} or ${params.param1}"
              }
            }
          } 

          There are a couple of drawbacks to this solution:

          • We need to modify the param in two places in the Groovy script
          • For triggered pipelines, the build-parameters page will still show the old param value from the previous config

          Thanks

          Diptiman Adak

           

          Jesse Glick added a comment -

          Discussed in https://github.com/jenkins-infra/repository-permissions-updater/issues/3551#issuecomment-1750578563.

            Assignee: Unassigned
            Reporter: Jesse Glick (jglick)
            Votes: 168
            Watchers: 180