- Bug
- Resolution: Unresolved
- Major
- None
- Jenkins LTS 2.263.x; git and branch related packages updated 2021-02-22 to whatever is current and applicable to the 2.263.x baseline; CentOS 7 with SELinux enabled and enforcing
While setting up a Jenkins controller instance (and a worker using the SSH Build Agent on the same machine, under a different account), I ran into some hurdles because SELinux is enabled and desired by the machine's owners, so the majority of internet lore about disabling it wherever you see it did not apply. Out of scope here is allowing HTTP port usage for the Jenkins service, which was relatively easy.
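I will not detail that part, but for the record it was along these lines - I am assuming the stock approach of teaching SELinux about the service port with semanage; 8080 is just an example, and if the port is already claimed by another type, -m is needed instead of -a:

semanage port -l | grep -w http_port_t
semanage port -a -t http_port_t -p tcp 8080    # or -m if the port is already defined for another type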
The big problem is using git checkouts over SSH, with the SSH Credentials (name + private key + passphrase) saved in the Jenkins Credential store. Configurations that worked healthily for me elsewhere, using the checkout([$class...]) magic (the job needs to fetch from several repos and platforms), failed here with "Permission denied" and/or "Check if you are allowed access" sorts of messages.
I hope to post methodology details below, but in short it was a big hunt in the terminal, with loops to ln the temporary files from the build workspace into a safe haven and to record the command lines and environment variables of the ssh and git processes spawned during a failed job run. Eventually I could grab the files that Jenkins generates and reproduce the failure from the command line - just using the same ssh wrapper script, ssh-askpass script, passphrase and private key files that Jenkins makes, and failing to even connect the ssh client to the Git platform, with no git involved.
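A rough sketch of that hunt, for the record - paths, file names and timings here are illustrative rather than exactly what I ran, and the safe haven must live on the same filesystem as the workspace for ln to work (otherwise use cp -p):

mkdir -p ~jenkins-worker/evidence
while true; do
    # hard-link the short-lived key/askpass files out of the ...@tmp dirs before Jenkins deletes them
    find ~jenkins-worker/jenkins/workspace -path '*@tmp/*' -type f \
        -exec ln -f '{}' ~jenkins-worker/evidence/ \; 2>/dev/null
    # capture the command lines and environments of any ssh/git processes spawned by the running job
    for PID in $(pgrep -x ssh; pgrep -x git); do
        tr '\0' ' '  < /proc/$PID/cmdline > ~jenkins-worker/evidence/$PID.cmdline 2>/dev/null
        tr '\0' '\n' < /proc/$PID/environ > ~jenkins-worker/evidence/$PID.environ 2>/dev/null
    done
    sleep 0.2
done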
One big discovery was that the internet lore is right: running setenforce 0 (as root) allows the Jenkins git+ssh access to work as expected, as well as the manually-made reproduction of the same sort of SSH access in a shell. But that is not what the security-conscious customer wants.
As soon as we restore setenforce 1, the ssh client fails again with "Permission denied". This was strace'd to be actually a local filesystem error in the openat() syscall, not a message from the remote Git server about some key it won't trust (and actually does trust under other client conditions). It was also traced that while the Jenkins-generated wrapper script calls ssh -i /path/to/temporary.key, tries to access that key, fails, and does not look into the ~/.ssh/ directory (which is correct with that CLI option), placing a copy of the same key file (without a passphrase, for simplicity) into ~/.ssh/ and not using the wrapper script did allow access. So it is the SELinux shims over the filesystem...
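Roughly, the two observations side by side (the key path is the placeholder already used above, git@example.com stands in for the actual Git platform, and id_reproduce is just a name I picked):

# under enforcing, the failure is a local openat() on the key file, before any server-side authentication
strace -f -e trace=openat ssh -i /path/to/temporary.key git@example.com 2>&1 | grep -i denied

# the same key (passphrase removed), copied into the conventional, correctly-labeled location
# and used without the wrapper script - this connects fine even with enforcing on
cp /path/to/temporary.key ~/.ssh/id_reproduce
chmod 600 ~/.ssh/id_reproduce
ssh -i ~/.ssh/id_reproduce git@example.com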
A bit more googling, and I found that SE-cured SSH is aware of filesystem labels on the key files it uses - those few blogs that do not detail how to kill SELinux suggest running restorecon or explicitly chcon on the ~/.ssh directory and its contents. This led me to relabeling the directory used by the Jenkins agent (and, being the file owner, the unprivileged Linux account of the build agent can do this):
chcon -R --type=ssh_home_t ~jenkins-worker/jenkins/workspace
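To double-check that the relabeling took effect, ls -Z shows the SELinux contexts; files created under the directory afterwards inherit its type by default (as far as I understand the default type transitions), which is presumably why the per-build ...@tmp key files become readable:

ls -dZ ~jenkins-worker/jenkins/workspace
ls -Z ~jenkins-worker/jenkins/workspace | head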
This worked, to the extent that a job which starts from scratch with deleteDir() before checkouts creates new ...@tmp subdirs with that label and launches an SSH client that can read the key file.
However, it fails further along, when launching the ssh-askpass helper (wrapper script) that would provide the passphrase; possibly SSH rejects it due to the same labels, which do not befit an executable that SE-cured SSH would be okay with running.
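I did not chase this to the end; the SELinux audit log is probably where the exact denial (and the label the helper would need) can be seen, along these lines:

sudo ausearch -m avc -ts recent
sudo ausearch -m avc -ts recent | audit2why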
A short-term workaround at this point was to save into Jenkins Credentials a copy of the key without a passphrase.
Probably a proper solution would be to detect at run time whether SELinux is a concern on the current system, and label just the newly made temporary private key file (hopefully that would suffice? or dedicate a subdirectory to it, the way ~/.ssh/ is used in real life...), somewhere around https://github.com/jenkinsci/git-client-plugin/blob/master/src/main/java/org/jenkinsci/plugins/gitclient/CliGitAPIImpl.java#L2072
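Expressed as shell, what I have in mind the plugin could do right after writing the temporary key is roughly this (a sketch, not a patch; selinuxenabled/getenforce/chcon come from the usual CentOS SELinux userland, and the key path is the same placeholder as above):

# only bother when SELinux is present and enforcing
if selinuxenabled && [ "$(getenforce)" = "Enforcing" ]; then
    # give the freshly written temporary key the type that a real ~/.ssh key would carry
    chcon --type=ssh_home_t /path/to/temporary.key
fi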
On a side note, I also tried the SSH Agent plugin (not the one for the build agents, but the one which launches ssh-agent and uses ssh-add to enroll known keys into it). This works well regardless of SELinux enforcement, and sh steps calling the git program explicitly can use the key provided from Credentials this way. Even programs in a terminal, to which I export the same envvar values as I see in the job console (with the job just sleeping inside the sshagent clause), can use this key. Only the prettily integrated sshagent{checkout} does not work: the git and ssh processes launched do not have these values in their /proc/PID/environ... They also do not have e.g. the GIT_TRACE that I set in the stage's environment clause, but they do have envvars made from build parameters - so I guess the context for the inheritable environment of a child process branches off sometime between the start of the job and the start of the stage, or the inherited environment for git launched by the checkout() step is too sanitized...
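For completeness, the manual check I mean looks like this (socket path, PID and repository URL are example values adapted from the job console, not anything specific):

# same values as printed by the sleeping job inside the sshagent clause
export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXX/agent.12345
export SSH_AGENT_PID=12345
ssh-add -l                                        # lists the key enrolled from Credentials
git ls-remote git@github.com:example/repo.git     # the agent supplies the key and the remote accepts it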
I did not check JGit - TBH I have never (knowingly at least) used it - but I did not quickly find indications in its source that it would be messing with temporary files to pass key data, so maybe it is immune to the problem.