I have upgraded to ssh-credentials version 1.14, which fixes SECURITY-440 / CVE-2018-1000601.

After upgrading from version 1.13, no job could authenticate to GitHub, since the credentials were using a "private key file on master".

According to the announcement:

      > Existing SSH credentials of these kinds are migrated to "directly entered" SSH credentials.

This does not seem to work for me. I do not see the `SECURITY-440: Migrating FileOnMasterPrivateKeySource to DirectEntryPrivateKeySource` message in the logs, and the "private key" input box of the credentials is just empty.

          [JENKINS-52232] Credentials not usable after upgrade to 1.14

Wadeck Follonier added a comment -

After the new tickets were opened, I re-tried to reproduce the case with the new information.

The migration of the credential keys was meant to happen after InitMilestone.JOB_LOADED is triggered, from the credentials plugin, in SystemCredentialsProvider.forceLoadDuringStartup(). At that point the running user should be SYSTEM. I discovered that the migration could be triggered earlier, when the credentials are stored in a folder or when they are used in the configuration of an agent (like ssh-agents). But even then, both migrations were done as SYSTEM in my case, and so successfully passed the permission check on RUN_SCRIPTS that was added specifically for the security patch.
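For anyone trying to narrow this down on their own instance, a minimal script console sketch along these lines (assuming the standard credentials and ssh-credentials plugin classes; adjust to your plugin versions) could list which SSH credentials still carry a file-based key source or ended up with an empty directly-entered one:

```groovy
// Hedged diagnostic sketch: list SSH key credentials and the type of their
// private key source, to see which ones were (or were not) migrated.
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import com.cloudbees.plugins.credentials.CredentialsProvider
import hudson.security.ACL
import jenkins.model.Jenkins

def creds = CredentialsProvider.lookupCredentials(
        BasicSSHUserPrivateKey.class, Jenkins.get(), ACL.SYSTEM, Collections.emptyList())
creds.each { c ->
    // DirectEntryPrivateKeySource is what the migration is supposed to produce
    println "${c.id} -> ${c.privateKeySource?.getClass()?.simpleName}"
}
```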

So now, to go further, I need logs from the people who encountered the issue. Especially if they can reproduce the case, perhaps they could provide more information: the list of installed plugins, the log file, the config file of their Jenkins, or anything else that could be useful (be careful not to upload credentials in plain text). What are you using as AuthorizationStrategy?

My hypothesis is that something forces the current running user to be anonymous instead of SYSTEM during the startup/migration (even temporarily) and touches the credentials while they are loading.
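To answer the AuthorizationStrategy question, a hedged script console sketch like the one below reports the configured strategy and what the interactive session is allowed to do (it cannot observe the startup thread itself, which is expected to run as SYSTEM, but it documents the setup):

```groovy
// Hedged sketch: report the authorization strategy and current permissions.
// The startup/migration thread is expected to be SYSTEM; the script console
// only reflects the logged-in user, so this mainly documents the configuration.
import jenkins.model.Jenkins

def j = Jenkins.get()
println "AuthorizationStrategy : ${j.authorizationStrategy.getClass().name}"
println "Current authentication: ${Jenkins.getAuthentication().name}"
println "Has RUN_SCRIPTS       : ${j.hasPermission(Jenkins.RUN_SCRIPTS)}"
```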

          "Call for witnesses"
          => People from this: jenkey, jnz_topdanmark, stuartwhelan
          => People from JENKINS-54746: fbaeuerle, dpogue, pjaytycy, tom_ghyselinck, cdlee, mrozekma, bluehorn, k8wbkxwgtenhgnghfm9t, jrudolph, aarondmarasco_vsi, sintbert
          => People from JENKINS-55971: sgjenkins


Nathan Neulinger added a comment -

I had the same symptom - just been watching the issues here; I have not tried re-updating since the last failure. In the first iteration, I didn't pay any active attention to the log during the upgrade itself, and only noticed the resolve errors afterwards.

          In my case, using project matrix auth. Will try to see if symptom reproduces and capture some additional logs for you if so. 


Wadeck Follonier added a comment -

nneul That would be very much appreciated! I tried with project matrix auth without success (= the migration was smooth). Could you give me some insight into the structure of your instance? Multiple folders? Restricted ones that hide some projects? Credentials used at folder level? Do you use the internal credential provider or external ones? Any information about your setup that is not "out of the box" will be useful.


Nathan Neulinger added a comment -

Gotta love when failure = working fine.

           

No folders, all jobs visible to anonymous, but there are a handful of jobs that are restricted from execution. I didn't look at credentials related to anything within the jobs - only at the ones for the agent connections. It's possible there was impact inside the jobs as well - I didn't get that far when I saw the upgrade fail.

Aside from sitting behind Apache and being in a different directory structure, the instance should be pretty vanilla.


Nathan Neulinger added a comment -

For the credentials, in my case it was almost entirely pre-existing use of ~jenkins/.ssh/id_rsa that was affected. Very little use of built-in creds - and none of that at all for the agent connections.


Michael Mrozek added a comment -

This no longer happens for me, so I assumed it was found and fixed; I didn't do anything to try to fix it. You keep mentioning SYSTEM, so I guess I should point out that I was only seeing this on Linux nodes; the SSH private key used to log in to them wasn't getting migrated correctly, so none of them would come up.


Aaron D. Marasco added a comment - edited

wfollonier sorry, I don't have much to contribute; as noted in the other ticket, I got it working and went on my merry way. As nneul noted above, my setup was pretty much the same. The Jenkins user on the Linux server had the SSH keys outside of Jenkins itself (in a standard Unix manner) and I had to manually copy them into the GUI.

           

          Edit: For some reason it stripped the link from "other ticket" to https://issues.jenkins-ci.org/browse/JENKINS-54746?focusedCommentId=357252


          Jon Brohauge added a comment -

          We usually don't do upgrades.
By leveraging Docker and JCasC, we rebuild from scratch every time we need a new version of Jenkins or a plugin. Having no state inside Jenkins, we can treat the containers as cattle: if one gets sick, out comes the bolt-pistol. As mentioned in my previous comment comment-343088, we fixed our issue by entering the SSH key "directly" and setting the proper scope.
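As a rough illustration of that approach (not the actual configuration from this instance), a JCasC fragment for a "directly entered" SSH key could look roughly like the following; the id, username, and environment variable name are placeholders, so check the credentials plugin's JCasC documentation for your versions:

```yaml
credentials:
  system:
    domainCredentials:
      - credentials:
          - basicSSHUserPrivateKey:
              scope: GLOBAL
              id: "agent-ssh-key"              # placeholder id
              username: "jenkins"              # placeholder username
              description: "SSH key entered directly instead of a file on master"
              privateKeySource:
                directEntry:
                  privateKey: "${AGENT_SSH_PRIVATE_KEY}"   # injected secret, placeholder name
```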


Nathan Neulinger added a comment -

Had a chance to try this again, and cannot reproduce now - the upgrade went through smoothly with no issues. Sorry I can't provide anything further.


Alexey Vazhnov added a comment - edited

I've just installed a fresh Jenkins and found I can't use the SSH private key from the Jenkins home directory, ~/.ssh/id_rsa. As a workaround, I put the SSH key into Jenkins Credentials, and it works.

          • SSH Slaves v1.29.4,
          • SSH Credentials Plugin v1.16,
          • Jenkins v2.164.3,
          • host and slave OS: Ubuntu 18.04.2 with all updates,
          • OpenSSH v7.6p1.

          Update: found this:

> SSH Credentials Plugin no longer supports SSH credentials from files on the Jenkins master file system, neither user-specified file paths nor ~/.ssh. Existing SSH credentials of these kinds are migrated to "directly entered" SSH credentials.
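If pasting the key into the UI is not practical, a hedged script console sketch along these lines does roughly the same thing: read the key from disk once and store it as a "directly entered" credential. All ids, usernames, and paths below are placeholders, and the exact constructor may differ between plugin versions:

```groovy
// Hedged sketch: create a "directly entered" SSH key credential, roughly
// equivalent to pasting the key into the Credentials UI.
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider

// Read the key from disk once; afterwards it lives inside Jenkins, not on the file system.
def key = new File('/var/lib/jenkins/.ssh/id_rsa').text    // placeholder path

def cred = new BasicSSHUserPrivateKey(
        CredentialsScope.GLOBAL,
        'agent-ssh-key',                                                // placeholder id
        'jenkins',                                                      // placeholder username
        new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(key),
        null,                                                           // passphrase, if any
        'Directly entered SSH key (manual migration)')

def provider = SystemCredentialsProvider.getInstance()
provider.getCredentials().add(cred)
provider.save()
```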

