Type: Bug
Resolution: Unresolved
Priority: Major
Labels: None
Environment: Jenkins ver. 2.53, Build Pipeline Plugin 1.5.6
Component: Pipeline
Steps to reproduce
- I created a Pipeline job
- During creation I checked the "This project is parameterized" checkbox and added two Choice parameters
- I ran the job and it failed
- I checked the configuration of the job: the parameters are no longer there and the "This project is parameterized" checkbox is no longer checked.
[JENKINS-43758] Parameters disappear from pipeline job after running the job
Hi,
I just found that all parameters defined in the Jenkins job (not in the pipeline) disappear when I use the following section in the pipeline:
options {
    buildDiscarder(logRotator(numToKeepStr: '5', artifactNumToKeepStr: '15'))
    timeout(time: 20, unit: 'MINUTES')
    timestamps()
}
Looks like a conflict in the logic. I wouldn't mind having all options and parameters defined in the pipeline code, but not all plugins are supported there.
As of current versions of Declarative (1.1.6 or later), job properties (such as parameters) defined in the job config UI will not be nuked by use of options or triggers (and as of workflow-multibranch 2.16, the same thing is the case for the properties step). The first time you run a build with options/triggers/parameters in a Declarative Pipeline or properties in a Scripted Pipeline after upgrading, the job properties configured in the UI will still get wiped out, but every run after that (and any run of a new job or one that didn't already have job properties configured) will keep them.
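For reference, a minimal Declarative sketch of the directives described above (the parameter name and values are illustrative, not taken from this issue); on Declarative 1.1.6 or later, running this should no longer wipe job properties configured in the UI after the first post-upgrade build:

// Illustrative sketch only; the parameter and its values are hypothetical.
pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
    }
    stages {
        stage('Build') {
            steps {
                echo "Deploying to ${params.DEPLOY_ENV}"
            }
        }
    }
}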
How is this resolved?
- I generate a pipeline job with the Job DSL plugin, with parameters.
- In the job's pipeline script, I configure the build discarder using the properties{} closure.
- Job runs.
- Parameters are gone.
- I regenerate the job (parameters are back).
- Job runs.
- Parameters are gone.
Is the solution to only configure parameters in the script? Should the build properties not be shown in the UI then? The behavior seems misleading.
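As an illustration of the second step above, a minimal sketch of such a properties{} call (the values are hypothetical); in a Scripted Pipeline a single call like this replaces the job's entire property list, which is why the Job-DSL-defined parameters are gone after the run:

// Hypothetical minimal example: this one call replaces all existing job properties,
// including parameters that the Job DSL seed job configured.
properties([
    buildDiscarder(logRotator(numToKeepStr: '5'))
])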
What makes it worse is that this doesn't always happen (or doesn't happen to all of my jobs), and I don't know why.
I'm reopening because this clearly still happens in my installation. I have core and all plugins from May 2018, way later than the previous comment. I'm not using Declarative.
As I mentioned, I don't know exactly under which conditions this occurs - it's pretty random.
Thanks akom for reopening this.
In my case, when my pipeline job is triggered from an upstream job the parameters remain intact even if the job fails, but when someone triggers the job manually it loses the parameters.
displayName "Feature Tests" parameters { stringParam('tags', "@regression", 'Test tag to run') stringParam('timeout', "10", 'Test timeout in minutes') lo stringParam('environment', "ci", 'environment to run against') } }
I was managing to work around this issue by setting all properties I need at once in the properties{} closure (I generate jobs with Job DSL, and would rather have them set there). This was working fine, until I reached a total blocker: I cannot set the "Trigger builds remotely" token.
In other words, I generate a job that has the token set, but after running the job the token disappears, and there is no pipeline DSL for setting it.
Sorry, correction - even the Job DSL plugin no longer handles "Trigger builds remotely" in pipelineJobs (it seems to only apply to freestyleJob now), so it's not configurable either at generation time or at runtime by any means. I had to downgrade to Job DSL 1.69 because of this, see JENKINS-52743
Jenkins: 2.141, Pipeline-API: 2.29
BTW: I am not using Declarative.
I am having this problem with declarative pipelines.
pipeline {
    agent any
    options {
        ansiColor('xterm')
        // Prevent multiple pipelines from running concurrently and failing due to tfstate lock file
        disableConcurrentBuilds()
    }
    triggers {
        gitlab(
            branchFilterType: 'All',
            triggerOnPush: false,
            triggerOnMergeRequest: true,
            triggerOpenMergeRequestOnPush: "never",
            triggerOnNoteRequest: true,
            noteRegex: "jenkins rebuild",
            skipWorkInProgressMergeRequest: true,
        )
    }
    ...
}
All of my pipelines are auto-imported from a separate job using the Job DSL plugin. I need to manually run each job once in order for the config.xml to be populated with the settings from the Jenkinsfile. When I check the configuration in the UI after this initial run I see that the pipeline is configured with the settings shown above. After I trigger the pipeline with a test merge request in GitLab the pipeline will succeed, but the trigger settings will disappear in the UI. If I trigger the job again manually the trigger settings for GitLab webhooks will re-appear.
Jenkins 2.107.3
akom I am having the same problem where I have 2 jobs which have their own version of the same pipeline, are both treated exactly the same, and one has this problem and the other does not. It's a very frustrating issue because it's inconsistent.
I'm having the same issue, here is my use case:
I have a seed job that loads all of our declarative pipeline jobs into Jenkins.
These pipelines have job parameters defined in their Jenkinsfile.
Each time the seed job runs, it overrides the job parameters and removes them all.
The next triggered build then puts the parameters back on the job.
I had the same issue in which my parameters and GitHub Pull Request Builder settings would disappear after a job builds. I had to run my seed job to recreate the project using the Job DSL to get the settings back.
What worked for me to fix it was to delete the project then run the seed job. After that my settings stayed. There was probably an invalid config that was left around that caused the bug and deleting the project got rid of it.
dzizes972 I am using a workaround, but you may not like it.
(The following applies to traditional pipelines. For declarative, you may need to adjust a few things)
The workaround:
- You need to bottleneck all of your property setting into a single call in the pipeline - you need to set all of them every time, not piecemeal. You can do it any way you like; I am including one approach below.
- In this call to the properties closure, you need to duplicate all the settings you set when you initially created the job (any that you omit will be lost).
- Make sure that the rest of the code does not set individual properties again (or if it does, it must set all of them in one call again).
My example:
- All my jobs are generated via the Job DSL plugin, and the initial values for job properties (including parameters) are set there. (The result is the same as creating the job by hand)
- In addition to the normal pipeline code, I insert a block that calls properties{} at the top; this block duplicates all initially configured options.
Since I'm using the Job DSL plugin, I have it prepend the pipeline code with an extra chunk that takes care of all that.
Here is a utility method I use in pipeline code. It covers all the properties that I ever set, and parameters (I only use string parameters) are supplied as an array in this format: ['NAME:DEFAULTVALUE:DESCRIPTION', etc]
/**
 * This exists primarily because of a bug in Jenkins pipeline that causes
 * any call to the "properties" closure to overwrite all job property settings,
 * not just the ones being set. Therefore, we set all properties that
 * the generator may have set when it generated this job (or a human).
 *
 * @param settingsOverrides a map, see defaults below.
 * @return
 */
def setJobProperties(Map settingsOverrides = [:]) {
    def settings = [discarder_builds_to_keep: '10',
                    discarder_days_to_keep: '',
                    cron: null,
                    paramsList: [],
                    upstreamTriggers: null,
                    disableConcurrentBuilds: false] + settingsOverrides
    // echo "Setting job properties. discarder is '${settings.discarder_builds_to_keep}' and cron is '${settings.cron}' (${settings.cron?.getClass()})"
    def jobProperties = [
        // these have to be strings:
        buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '',
                                  daysToKeepStr: "${settings.discarder_days_to_keep}",
                                  numToKeepStr: "${settings.discarder_builds_to_keep}"))
    ]
    if (settings.cron) {
        jobProperties << pipelineTriggers([cron(settings.cron)])
    }
    if (settings.upstreamTriggers) {
        jobProperties << pipelineTriggers([upstream(settings.upstreamTriggers)])
    }
    if (settings.disableConcurrentBuilds) {
        jobProperties << disableConcurrentBuilds()
    }
    if (settings.paramsList?.size() > 0) {
        def generatedParams = []
        settings.paramsList.each {
            // params are specified as name:default:description
            def parts = it.split(':', 3).toList() // I need to honor all delimiters but I want a list
            generatedParams << string(name: "${parts[0]}", defaultValue: "${parts[1] ?: ''}",
                                      description: "${parts[2] ?: ''}", trim: true)
        }
        jobProperties << parameters(generatedParams)
    }
    echo "Setting job properties: ${jobProperties}"
    properties(jobProperties)
}
So my job's pipeline definition looks like this:
setJobProperties(
    // each of these is optional, you may simply need the paramsList and that's it.
    discarder_builds_to_keep: "30",
    //cron: "",
    paramsList: ['SAMPLE_PARAM:apple:Some description'],
    //upstreamTriggers: 'some-job',
    //disableConcurrentBuilds: true
)
// now regular pipeline code...
If this doesn't fit your situation, there are plenty of other ways, just make sure to follow the rules at the top.
I have the same issue. I have a pipeline job that is pulled from a GitHub repo and the script includes the following:
parameters
After I save the pipeline initially, I am able to go back into the config and set the default value for my param. I then run the job once and everything runs great, but when I access my pipeline again the default value I set in the UI for the param is gone. This happens every time I set the default value for the param in the UI and then run the job.
Thanks in advance.
*FIXED! - I had some other issues with pipelines not registering webhooks for 'GitHub hook trigger for GITScm polling'. I have one pipeline that does save parameters and works with webhook registration, so I compared the pipelines and noticed that I had written them differently. I then rewrote one of my pipelines to match the structure of the one that does work, and now it seems I am able to save parameters after a build!
akom Thank you for your comment - it cleared up why I was having the same issue: we use all .groovy pipelines, and our parameters were being wiped out by a subsequent "parameters([disableConcurrentBuilds()])".
I have not seen an official roadmap for bugfixes (is there one?) Either way I have upvoted this issue, it has (and suspect will again) caused confusion and problems for me.
Last comment/question - is there any way to read the properties LinkedHashMap? I would like to expand on akom's workaround: read/store the live properties, add/modify one, then write them back into the actual live object. This way I wouldn't have to force others to use my processes in the pipeline (I develop libraries for other developers to use in their builds, and need to set SOME properties, but don't want to stomp on theirs). Alternatively, providing a "properties <<" or a "properties.append([somepropertylist])" might be less destructive to people already relying on the current behavior to intentionally wipe previous properties.
Just stumbled on this bug as well, wondering why I lose my JobDSL parameters after a run. akom has a great workaround above, but I really need what mrysanek is asking for - a way to get the current properties so I can add to them.
aarondmarasco_vsi, to my knowledge there is no way to get the current job properties in a format suitable for properties{}.
The only way I can see of doing this would be to access the individual getters on currentBuild.rawBuild.parent (an instance of WorkflowJob) and then transform their current values into arguments to properties{}. This would certainly be brittle, and if you use Script Security, this will require approval.
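To make that concrete, here is a hedged, untested sketch along those lines (string parameters only; the class and step names are standard Jenkins/Pipeline APIs, but the approach remains as brittle as described above and needs script approval when the sandbox is enabled):

// Hedged sketch, not a confirmed solution: read the job's existing string parameters via
// currentBuild.rawBuild.parent and rebuild them as arguments for a later properties{} call.
def existingStringParams() {
    def job = currentBuild.rawBuild.parent   // the WorkflowJob that owns this run
    def paramsProp = job.getProperty(hudson.model.ParametersDefinitionProperty)
    def defs = paramsProp ? paramsProp.parameterDefinitions : []
    return defs.findAll { it instanceof hudson.model.StringParameterDefinition }.collect {
        string(name: it.name, defaultValue: it.defaultValue ?: '', description: it.description ?: '')
    }
}

// Usage: merge the preserved parameters with whatever else you want to set, in one call.
properties([
    buildDiscarder(logRotator(numToKeepStr: '10')),
    parameters(existingStringParams())
])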
abayer Any update on this issue? Was there any upgrade in the plugin to fix it, or do we need to go with Alexander's workaround? I tried with the latest Job DSL 1.74 plugin as well and it still has the issue. Please update the roadmap/fix.
So if you're not using Job DSL, please open a separate JIRA. If you're using the properties step in Scripted Pipeline or the parameters directive in Declarative, those do try to preserve job properties and build parameters defined outside of the pipeline, but Job DSL is still going to wipe out whatever is in the properties and parameters when it runs its seed job. Also, don't ever call properties step more than once in a pipeline - you're gonna run into a bunch of potential pitfalls there.
Another super-annoyed user here (sorry to say that, but that's just the truth).
We are setting up most of our jobs via JCasC (which wraps JobDSL) and every single time we execute our JCasC yaml files, all properties that are defined by the respective pipeline scripts are lost: parameters, triggers, sidebar links etc.
Losing parameters of jobs that are triggered not by human project members but by other systems/scripts (e.g. Pull Request Notifier for Bitbucket Server) is especially painful.
Jobs that are frequently triggered by human project members will sooner or later get their parameters back, because someone will eventually click "Build Now", but jobs triggered from outside will just never run (rejected because of "unknown" parameters?).
Every single time we execute our JCasC scripts we have to go through a list of jobs and "fix" them by clicking "Build Now". Yes, we could write a script for that, but some jobs don't have parameters.
Instead they need to have their SCM polling re-initialized. Since some of those jobs run for many hours, we need to abort them right away. Writing a script for all those cases feels like investing too much time on the wrong end of the problem.
I am willing to contribute a fix but where to start? What is the right approach? Should we start with an opt-in to preserve (instead of wipe) parameters, triggers etc.?
Like others have mentioned here, it would be very useful if we could append to existing properties inside a pipeline script.
We should be able to run `properties` more than once with an append option.
Should I open a separate issue specifically to track this request?
We struggled with this problem too. For now, we wrote a little script to start jobs and put it in the /var/jenkins_home/init.groovy.d folder, which contains Groovy scripts that are executed after the Jenkins Docker instance starts. After that, we added a stage to the pipeline script of the relevant job to stop it after Jenkins restarts. I know this is hacky, but it at least works for now until this issue is resolved.
Additionally, the queue function of jobDsl sometimes didn't work, so we gave up on using it. It seems there is a race condition.
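For anyone wanting to try the same approach, a hedged sketch of such an init.groovy.d script (the job names are placeholders; this assumes the jobs are already loaded when the script runs):

// Hypothetical init.groovy.d/kick-jobs.groovy: schedule one build of each listed pipeline job
// after Jenkins starts, so the run re-applies the properties defined in its Jenkinsfile.
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.job.WorkflowJob

def jobsToKick = ['folder/example-job-a', 'example-job-b'] // placeholders - adapt to your jobs

Jenkins.instance.getAllItems(WorkflowJob).each { job ->
    if (job.fullName in jobsToKick) {
        job.scheduleBuild2(0) // quiet period 0; default parameter values are used
    }
}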
famod I'm facing exactly the same issue, may I know if you have found any solution?
Hey Team, is there a fix or workaround available for this?
I have created a workaround based on the approach taken from this post:
https://issues.jenkins.io/browse/JENKINS-44681?focusedCommentId=304082&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-304082
The script saves the existing properties of the pipeline job, if there are any, and then recreates them in the Job DSL block.
import jenkins.model.Jenkins
import hudson.model.Item
import hudson.model.Items

def jobProperties
Item currentJob = Jenkins.instance.getItemByFullName('_test')
if (currentJob) {
    jobProperties = currentJob.@properties
}

pipelineJob('_test') {
    displayName('_test')
    description('Test job')
    disabled(false)
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('***')
                        credentials('***')
                    }
                    branch('master')
                    extensions {
                    }
                }
            }
            scriptPath('***')
        }
    }
    if (jobProperties) {
        configure { root ->
            def properties = root / 'properties'
            jobProperties.each { property ->
                String xml = Items.XSTREAM2.toXML(property)
                def jobPropertiesPropertyNode = new XmlParser().parseText(xml)
                properties << jobPropertiesPropertyNode
            }
        }
    }
}
Ok, this is really major for us, so I ran a bunch of tests to try to reproduce the issue.
All based on a declarative pipeline script, with the script coming from SCM (git) and containing "options { skipDefaultCheckout() }".
1. Re-adding the parameters manually solves the issue
If:
- I create a new pipeline with "COPY FROM"
- I trigger it manually; the job deletes my parameters configured in the job UI
- Then I restore my configuration via the "job config history" plugin
- I trigger again, parameters disappear again
- over and over and over ...
However, as soon as I re-enter the parameters manually (not via the config history), all good: the parameters do not disappear anymore.
2. Issue when "COPY FROM"
If I create a new pipeline with "COPY FROM", I have the issue.
If I create a fully new pipeline, adding exactly the same settings, including the same pipeline script from the same git repository and the same tag, there are no issues...
So I guess this is coming from merge conflicts in the job configuration.
It seems random because it depends on the last change date.
The workarounds here re-add the properties with the latest change date.
Does that make any sense?
I'm facing the same issue. I tried tsurankov's approach but it didn't work for me.
This is what my job(s) look like:
import javaposse.jobdsl.dsl.DslFactory

def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]

DslFactory dslFactory = this as DslFactory

repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
This works as expected the first time. But after the build is triggered, the parameters disappear.
I've tried a lot of things but nothing worked so far. Would appreciate any help on this. Thanks.
I generate the Pipeline Job with the Job DSL, and the Pipeline Job has parameters and properties defined. The Pipeline script does not define parameters. In my case it includes a Groovy library which references parameters and environment (properties) from the job:
String email = pipeline.params.email.trim()
// and
pipeline.archiveArtifacts artifacts: pipeline.env.STAGE_NAME + '/*.out, '
If the library fails with an error, then the parameters are still defined in the Pipeline Job definition.
But once the pipeline starts running, the parameters and properties on the pipeline job definition are gone.
They are there for the first run, but the job definition has already been wiped of parameters and properties.
I tried switching to properties([ parameters([]) ]) and that didn't work at first.
Then at the bottom of my pipeline I saw some additional "properties([])" lines in later stages. I removed those, and the params seem to be staying in the UI.
At this point I am reluctant to touch it to try the more standard "parameters { }" definition.
Hi,
I have the same issue. When the job finishes successfully there is no problem; after a failure, all defined parameters are gone.
I tried adding import hudson.model.* at the beginning of the job - it didn't help. It helps only if I define the parameters in the pipeline code - but I cannot use the Active Choice Reactive Parameter there. Here is the pipeline code: