I have upgraded to ssh-credentials version 1.14, which fixes SECURITY-440 / CVE-2018-1000601.

      After upgrading from version 1.13, no job could authenticate to GitHub, since the credentials were using a "private key file on master".

      According to the announcement:

      > Existing SSH credentials of these kinds are migrated to "directly entered" SSH credentials.

      This does not seem to work for me. I do not see the `SECURITY-440: Migrating FileOnMasterPrivateKeySource to DirectEntryPrivateKeySource` message in the logs, and the "private key" input box of the credentials is just empty.

          [JENKINS-52232] Credentials not usable after upgrade to 1.14

          Wadeck Follonier added a comment - - edited

          jenkey, do you have a backup of that instance, so that we can try to reproduce the behavior?

          If so, could you try restarting the instance after the plugin update?

          Could you also give us the version of your credentials plugin?

          So that we can reproduce the problem ourselves: did you upgrade the plugin using the update center, and then restart your instance?

          Claudio B added a comment -

          wfollonier, sorry I don't have a backup for this instance. In the meantime, I simply copied the private key from the file, pasted it into the input box and saved the credentials.

          I did install the plugin using the update center, and I restarted Jenkins using the "Restart Jenkins if installation complete and no jobs are running" checkbox. Is that enough, or should I make sure to restart the service next time?

          After the upgrade, once I realized that none of my jobs worked anymore, I downgraded the ssh-credentials plugin to version 1.13, restarted, and everything worked again.

          BTW, I am using the Jenkins RPM from pkg.jenkins.io/redhat-stable.

          The credentials-plugin is at version 2.1.17.
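
          For reference, the manual fix described above (copying the key from the file on the master into the "Private Key" field) can also be scripted. A minimal Script Console sketch, assuming a system-level credential; the credential ID and key file path below are placeholders, not values from this issue:

          ```groovy
          import com.cloudbees.plugins.credentials.SystemCredentialsProvider
          import com.cloudbees.plugins.credentials.domains.Domain
          import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
          import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey.DirectEntryPrivateKeySource

          // Placeholders: adjust the credential ID and key file path for your instance.
          def credentialId = 'github-deploy'
          def keyText = new File('/var/lib/jenkins/.ssh/id_rsa').text

          def store    = SystemCredentialsProvider.instance.store
          def domain   = Domain.global()
          def existing = store.getCredentials(domain).find { it.id == credentialId }

          if (existing instanceof BasicSSHUserPrivateKey) {
              // Rebuild the credential with the key entered directly, keeping the
              // existing scope, username, passphrase and description.
              def updated = new BasicSSHUserPrivateKey(
                      existing.scope,
                      existing.id,
                      existing.username,
                      new DirectEntryPrivateKeySource(keyText),
                      existing.passphrase?.plainText,
                      existing.description)
              store.updateCredentials(domain, existing, updated)
          }
          ```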

          Stuart Whelan added a comment -

          We had exactly the same issue. We are on Jenkins 2.107.3; we upgraded the plugin to 1.14 and found the only option we had was 'enter directly'. Downgrading the plugin to 1.13 restored the functionality.

          Jon Brohauge added a comment - - edited

          In SECURITY-440 it is stated that:

          > SSH Credentials Plugin no longer supports SSH credentials from files on the Jenkins master file system, neither user-specified file paths nor ~/.ssh. Existing SSH credentials of these kinds are migrated to "directly entered" SSH credentials.

          Thus you need to enter your SSH key directly. We fixed this by putting our SSH key into an environment variable and loading it into the credentials at boot time.

          BTW: the scope of the credentials needs to be SYSTEM, not GLOBAL, for this version (1.14) to work in "GitHub Organization" type jobs. Maybe this is required for other types of jobs as well.
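
          A minimal sketch of that boot-time approach, as an init.groovy.d startup script; the environment variable name, credential ID and username below are assumptions for illustration, not necessarily what was actually used:

          ```groovy
          import com.cloudbees.plugins.credentials.CredentialsScope
          import com.cloudbees.plugins.credentials.SystemCredentialsProvider
          import com.cloudbees.plugins.credentials.domains.Domain
          import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
          import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey.DirectEntryPrivateKeySource

          // Read the private key from an environment variable set on the host/container.
          // The variable name, credential ID and username are placeholders.
          def keyText = System.getenv('JENKINS_SSH_PRIVATE_KEY')
          if (keyText) {
              def credential = new BasicSSHUserPrivateKey(
                      CredentialsScope.SYSTEM,                   // SYSTEM scope, as noted above
                      'github-org-ssh',                          // credential ID
                      'git',                                     // username
                      new DirectEntryPrivateKeySource(keyText),
                      '',                                        // no passphrase in this sketch
                      'SSH key loaded from the environment at boot')
              SystemCredentialsProvider.instance.store.addCredentials(Domain.global(), credential)
          }
          ```

          With JCasC the same idea can be expressed declaratively, since its configuration supports environment-variable substitution for a directly entered key.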

          Devin Nusbaum added a comment -

          wfollonier took a look but was not able to reproduce this issue. As far as we know, the migration should work correctly. If anyone can reproduce the migration not working, please reopen the ticket and comment with the steps to reproduce the issue.

          Devin Nusbaum added a comment -

          Looking at JENKINS-54746, I think the exception in this comment (users report seeing it in the old data monitor) is the root cause. The migration is gated by the RunScripts permission being active when that code runs, and that code is expected to run as ACL.System, but for some reason it ran as ACL.anonymous. When the exception is thrown in readResolve, perhaps the effect is that the serialized data is totally lost and it is as if no private keys were ever entered. Perhaps this code should be modified to just return if that permission is not found, to avoid the conversion exception.

          Wadeck Follonier added a comment -

          After the new tickets were opened, I re-tried to reproduce the case with the new information.

          The migration of the credential keys was meant to be done after InitMilestone.JOB_LOADED is triggered, from the credentials plugin, in SystemCredentialsProvider.forceLoadDuringStartup(). At that point the running user should be SYSTEM. I discovered that the migration could be triggered earlier, when the credentials are stored in a folder or when they are used in the configuration of an agent (such as SSH agents). But even then, both migrations were done as SYSTEM in my case, and so passed the permission check on RUN_SCRIPTS that was added specifically for the security patch.

          So now, to go further, I need the logs from people who encountered the issue. Especially if they can reproduce the case, they could provide more information about all the plugins, the log file, the config file of their Jenkins, or anything else that could be useful (be careful not to upload credentials in plain text). What are you using as AuthorizationStrategy?

          My hypothesis is that something forces the current running user to be anonymous instead of SYSTEM during the startup/migration (even temporarily) and touches the credentials while they are loading.

          "Call for witnesses"
          => People from this: jenkey, jnz_topdanmark, stuartwhelan
          => People from JENKINS-54746: fbaeuerle, dpogue, pjaytycy, tom_ghyselinck, cdlee, mrozekma, bluehorn, k8wbkxwgtenhgnghfm9t, jrudolph, aarondmarasco_vsi, sintbert
          => People from JENKINS-55971: sgjenkins
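
          One way to gather part of that information is a small Script Console sketch like the one below (assuming the credentials live in the system store; folder-level credentials would need a different lookup). It reports, for each SSH credential, which key source class it uses and whether any key material is currently resolvable:

          ```groovy
          import com.cloudbees.plugins.credentials.SystemCredentialsProvider
          import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey

          // For every system-level SSH credential, print its key source class
          // (e.g. DirectEntryPrivateKeySource after a successful migration) and
          // whether a private key can actually be resolved.
          SystemCredentialsProvider.instance.credentials
                  .findAll { it instanceof BasicSSHUserPrivateKey }
                  .each { c ->
                      def source = c.privateKeySource?.getClass()?.simpleName
                      def hasKey
                      try {
                          hasKey = !c.privateKeys.isEmpty()
                      } catch (e) {
                          hasKey = "error: ${e.message}"
                      }
                      println "${c.id}: keySource=${source}, keyPresent=${hasKey}"
                  }
          ```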

          Nathan Neulinger added a comment -

          I had the same symptom - I've just been watching the issues here and have not tried re-updating since the last failure. In the first iteration, I didn't pay any active attention to the log during the upgrade itself, and only noticed the resolve errors afterwards.

          In my case, I'm using project matrix auth. I will try to see if the symptom reproduces and capture some additional logs for you if so.

          Wadeck Follonier added a comment -

          nneul That would be very much appreciated! I tried with project matrix auth without success (i.e. the migration was smooth). Could you give me some insight into the structure of your instance? Multiple folders? Restricted ones that hide some projects? Credentials used at folder level? Do you use the internal credential provider or external ones? Any information about your setup that is not "out of the box" will be useful.

          Nathan Neulinger added a comment -

          Gotta love when failure = working fine.

          No folders, and all jobs are visible to anonymous, but we have a handful of jobs that are restricted from execution. I didn't look at credentials related to anything within the jobs - only at those for the agent connections. It's possible there was impact inside the jobs as well - I didn't get that far when I saw the upgrade fail.

          Aside from sitting behind Apache and being in a different directory structure, the instance should be pretty vanilla.

          Nathan Neulinger added a comment -

          For the credentials, in my case it's almost entirely pre-existing use of ~jenkins/.ssh/id_rsa that was affected. Very little use of built-in creds - and none at all for the agent connections.

          Michael Mrozek added a comment -

          This no longer happens for me, so I assumed it was found and fixed; I didn't do anything to try to fix it. You keep mentioning SYSTEM, so I guess I should point out that I was only seeing this on Linux nodes; the SSH private key used to log in to them wasn't getting migrated correctly, so none of them would come up.

          Aaron D. Marasco added a comment - - edited

          wfollonier, sorry, I don't have much to contribute; as noted in the other ticket, I got it working and went on my merry way. As nneul noted above, my setup was pretty much the same. The Jenkins user on the Linux server had the SSH keys outside of Jenkins itself (in a standard Unix manner), and I had to manually copy them into the GUI.

           

          Edit: For some reason it stripped the link from "other ticket" to https://issues.jenkins-ci.org/browse/JENKINS-54746?focusedCommentId=357252

          Jon Brohauge added a comment -

          We usually don't do upgrades.
          By leveraging Docker and JCasC, we rebuild from scratch every time we need a new version of Jenkins or a plugin. Having no state inside Jenkins, we can treat the containers as cattle: if one gets sick, here comes the bolt pistol. As mentioned in my previous comment comment-343088, we fixed our issue by entering the SSH key "directly" and setting the proper scope.

          Nathan Neulinger added a comment -

          I had a chance to try this again and cannot reproduce it now - the upgrade processed smoothly with no issues. Sorry I can't provide anything further.

          Alexey Vazhnov added a comment - - edited

          I've just installed a fresh Jenkins and found I can't use an SSH private key from the Jenkins home directory, ~/.ssh/id_rsa. As a workaround, I put the SSH key into Jenkins Credentials, and it works.

          • SSH Slaves v1.29.4,
          • SSH Credentials Plugin v1.16,
          • Jenkins v2.164.3,
          • host and slave OS: Ubuntu 18.04.2 with all updates,
          • OpenSSH v7.6p1.

          Update: found this:

          > SSH Credentials Plugin no longer supports SSH credentials from files on the Jenkins master file system, neither user-specified file paths nor ~/.ssh. Existing SSH credentials of these kinds are migrated to "directly entered" SSH credentials.
