
Serve static files from second domain as an alternative to setting CSP

    • Type: New Feature
    • Resolution: Fixed
    • Priority: Major
    • Component: core
    • Label: jenkins-2.200

      Dealing with Content-Security-Policy is just too annoying, and there are too many plugins trying to just serve static files in Jenkins, often for no real reason.

      We need second-domain support for static resources (DirectoryBrowserSupport) such that those resources can be accessed without authentication, just with a token, and that token is used for linked resources as well.


          Matt Sicker added a comment -

          Now when you say second domain, can you clarify the expected scope here? Here are some potential scope options:

          • Support multiple domains via multiple web apps (i.e., keep Jenkins as one war, and have another war for handling static assets and access control)
          • Support multiple domains via fancy Apache configs
          • Support multiple domains where the static domain uses a dedicated web server like Apache or nginx (along with any config needed to allow for access control)
          • Support multiple domains via CDN

          Another orthogonal concern: using subdomains of the same domain versus completely separate domains (though since many static assets require authorization, the usual benefits of splitting up your CDN domain name from your app domain name don't apply; we still need the cookies).


          Matt Sicker added a comment -

          Oh, I suppose maybe there's a fifth option:

          • VirtualHost-style support in Winstone. Avoids the need for a reverse proxy to combine domains in simple scenarios as well as duplicating the servlet container. I'm not sure if this is viable depending on Winstone/Jetty features.


          Matt Sicker added a comment -

          Another option: when using Kubernetes, this is just an exercise in devops to rewrite ingress rules based on paths.


          Daniel Beck added a comment - edited

          This is specifically about the functionality in DirectoryBrowserSupport that is affected by CSP from SECURITY-95 and breaks many plugins that follow the (anti)pattern of archiving a bunch of HTML files, then serving them via DirectoryBrowserSupport. We even had plugins programmatically disable CSP protection (SECURITY-309).

          Ideally we figure out a way for Jenkins/Stapler to respond differently for a different domain ( Host header) and implement something like github.com/githubusercontent.com on that domain.

          Ideally one DirectoryBrowserSupport would correspond to one random prefix (necessary since there would be no auth on the second domain to hijack), to not break relative links within a set of archived files, such as an archived set of Javadoc HTML files.
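The unguessable per-DirectoryBrowserSupport prefix Daniel describes could be generated along these lines. This is a minimal plain-JDK sketch; the class and method names are hypothetical, not Jenkins API:

```java
import java.security.SecureRandom;
import java.util.Base64;

/** Hypothetical sketch: one unguessable URL-safe prefix per DirectoryBrowserSupport. */
public class RandomPrefix {
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns a fresh 128-bit random token, URL-safe and without padding. */
    public static String newPrefix() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

128 bits of randomness makes the prefix infeasible to guess, which is what substitutes for authentication on the second domain.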


          Jesse Glick added a comment -

          a way for Jenkins/Stapler to respond differently for a different domain (Host header)

          No deep surgery in Stapler is really necessary, I think. Would suffice for Jenkins to define an UnprotectedRootAction to serve the static content. Then you can configure your reverse proxy to map requests to the special domain to a path prefix of the Jenkins service. For example, this is straightforward to set up in Kubernetes using the nginx-ingress controller.

          As to handling non-anonymously-readable content, this can be handled in various ways. Probably something like BoundObjectTable with a one-hour expiry would suffice. (To save memory and allow links to be valid indefinitely so long as the user exists and retains access, perhaps you could support WithWellKnownURL, by encoding both a path from root to the DirectoryBrowserSupport.owner (via JENKINS-26091) and an Authentication.name in a Secret. Given a random initialization vector, I believe that is safe.)
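Jesse's parenthetical idea of encoding both a model path and an Authentication.name into one opaque token, using a random initialization vector, could look roughly like this plain-JDK sketch. All names here are hypothetical, and plain AES-GCM stands in for whatever Jenkins's Secret machinery would actually do:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** Hypothetical sketch: pack (model path, user name) into one opaque, tamper-proof token. */
public class WellKnownToken {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final SecretKey key;

    public WellKnownToken(SecretKey key) { this.key = key; }

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public String encode(String modelPath, String userName) {
        try {
            byte[] iv = new byte[12];                     // fresh random IV per token
            RANDOM.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal((modelPath + "\n" + userName).getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length]; // token layout: IV || ciphertext+tag
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Returns {modelPath, userName}; throws if the token was tampered with. */
    public String[] decode(String token) {
        try {
            byte[] in = Base64.getUrlDecoder().decode(token);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            byte[] pt = c.doFinal(in, 12, in.length - 12);
            return new String(pt, StandardCharsets.UTF_8).split("\n", 2);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because GCM authenticates as well as encrypts, a forged or modified token fails to decode, which is what makes the "valid indefinitely, no server-side state" property plausible.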

          The critical question for me is what constraints are placed on the “second domain” by the non-CSP defenses built into browsers. This very much affects whether administrators will find it practical to set up such a route: the reverse proxy is just a matter of configuration, but getting a new or expanded DNS entry typically requires extra steps. For example, if Jenkins is normally served from https://dev.mycorp.com/jenkins/ then which of the following URL prefixes would be eligible for serving static content?

          • https://dev.mycorp.com/static-jenkins/
          • https://static.dev.mycorp.com/
          • https://static-dev.mycorp.com/
          • https://mycorp-static.com/
          • https://static.mycorp.net/


          Kalle Niemitalo added a comment -

          The frame-ancestors directive of Content-Security-Policy cannot distinguish between https://dev.mycorp.com/jenkins/ and https://dev.mycorp.com/static-jenkins/. See CSP: frame-ancestors should check origins, not URLs · Issue #311 · w3c/webappsec.

          The HTTP cookies set by Jenkins seem to be using HostOnly and HttpOnly, except the "screenResolution" cookie. I think this makes https://static.dev.mycorp.com/ and https://static-dev.mycorp.com/ less risky than they might otherwise be.

          Matt Sicker added a comment -

          I've read through some of the older issues related to this. So it seems like the main purpose of this feature request is to allow for the following scenario:

          As a plugin developer, I want a safe place to publish static assets on a separate domain so that a content security policy can be used to help prevent published content from interacting with and exploiting Jenkins itself. For example, if I wanted to publish some test results that have a fancy JavaScript-based UI, it would be nice to host that on its own domain so that it can't interact with Jenkins JavaScript files or similar. This would also be useful to avoid a vector for exploiting XSS vulnerabilities in Jenkins.

          Based on how CSP works as kon mentions, we have to distinguish based on the domain name mostly, not the URL (other than the scheme and port). So I'd imagine we should try to support all the listed suggestions from jglick except for the subdirectory one.

          Any directory browser that requires authentication wouldn't really benefit from a separate domain name like example-cdn.com alongside example.com, due to needing cookies for both. Static resources that are publicly available can benefit from a separate domain for CDN usage, though that seems a bit overkill for Jenkins (or maybe it isn't?).


          Jesse Glick added a comment -

          As a plugin developer, I want a safe place to publish static assets

          Well, yes, but also we want admins to stop recklessly disabling CSP because of the many things that inevitably break when it is enabled and for which Googling the error message gives you bad advice.

          I'd imagine we should try to support all the listed suggestions from Jesse Glick

          Maybe I should clarify: the list is in order from easiest to configure to hardest to configure (in general). So if it seems that merely having a nonequal host suffices for protection, then https://static.dev.mycorp.com/ would be the most attractive option, as it only requires that your DNS grant for Jenkins accepts wildcards, which it may already. The code in Jenkins need not care at all which host you choose, but we need to have a canonical recommendation for the reverse proxy that is likely to be implementable.

          Any directory browser that requires authentication

          See above. danielbeck and I are both assuming that the feature is fully usable when the DirectoryBrowserSupport.owner is accessible only to certain authenticated users, because the static “site” is serving content only from specially constructed URLs that encode sufficient credentials. See for example what GitHub does when showing a Raw link for a file in a private repository.


          Matt Sicker added a comment -

          I had a talk with Jesse to further hash out this idea. At the moment, the idea is to create some sort of time-bound cache mapping static asset tokens to URLs, where DirectoryBrowserSupport can be configured to only serve requests at a specific origin while all other requests are served from the Jenkins URL. When a secondary static domain is configured, the servlet filter should deny all requests to static assets unless they go through the configured domain, and requests to non-DBS pages must go through the root origin (possibly derived from the Jenkins root URL config, though this might need to be a separate setting).
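The routing rule described in that paragraph (DBS resources only from the static origin, everything else only from the main origin) might be sketched as a decision function like the following. This is a hypothetical simplification, not the actual servlet filter, and all names are illustrative:

```java
/** Hypothetical sketch of the host-based routing rule: when a static domain is
 *  configured, static resources may only be served from it, and everything
 *  else may only be served from the main Jenkins host. */
public class OriginRouter {
    public enum Decision { SERVE, DENY }

    private final String mainHost;    // e.g. "jenkins.dev.example.com"
    private final String staticHost;  // e.g. "jenkins-static.dev.example.com"; null if unset

    public OriginRouter(String mainHost, String staticHost) {
        this.mainHost = mainHost;
        this.staticHost = staticHost;
    }

    public Decision route(String requestHost, boolean isStaticResource) {
        if (staticHost == null) {
            return Decision.SERVE;  // no second domain configured: no host restriction
        }
        String expected = isStaticResource ? staticHost : mainHost;
        return expected.equals(requestHost) ? Decision.SERVE : Decision.DENY;
    }
}
```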

          Then it's a matter of setting up DNS appropriately to serve two domains to the same Jenkins instance and configuring the static origin settings. For Kubernetes-based setups, we might be able to provide or suggest some nginx-controller routing rules to support this. I'll likely be developing this feature using Apache as a reverse proxy, though I'll take a look into the K8s approach as well if it's not too complicated.


          Matt Sicker added a comment -

          To clarify on the GitHub example, here's a sample Jenkins analogue:

          • Main site: https://jenkins.dev.example.com/
          • Original static content URL: https://jenkins.dev.example.com/userContent/foo/bar.zip
          • Tokenized secondary domain URL: https://jenkins-static.dev.example.com/userContent/foo/bar.zip?token=ABC123
          • ...

          Daniel Beck added a comment -

          I would expect the same model (token query parameter) to not work as soon as you render HTML files that include resources like images or external style sheets, unless you start rewriting responses, checking referrers, or similar.


          Jesse Glick added a comment -

          And that URL pattern would not work for UnprotectedRootAction anyway. I would expect something more along the lines of https://jenkins-static.dev.example.com/staticContent/ABC123/foo/bar.zip. Or for a more realistic transformation:

          • https://dev.example.com/jenkins/job/stuff/javadoc/com/corp/stuff/package-summary.html (default)
          • https://static.dev.example.com/jenkins/user-static-whatever/ABC123/com/corp/stuff/package-summary.html (with config set to static.dev.example.com)

          Here job/stuff/javadoc/ produces a DirectoryBrowserSupport in Stapler navigation, it gets bound to a table and assigned an ID, and then static content under that is served if the token matches.
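The URL shape proposed above, with the token as the first path component under the root action, could be split apart along these lines (a hypothetical sketch; the class and field names are invented for illustration):

```java
import java.util.Optional;

/** Hypothetical sketch of splitting "<token>/<rest-of-path>" below the
 *  static-content root action into its two parts. */
public class StaticPath {
    public final String token;       // e.g. "ABC123"
    public final String restOfPath;  // e.g. "com/corp/stuff/package-summary.html"

    private StaticPath(String token, String restOfPath) {
        this.token = token;
        this.restOfPath = restOfPath;
    }

    /** @param path the path below the root action, without a leading slash */
    public static Optional<StaticPath> parse(String path) {
        int slash = path.indexOf('/');
        if (slash <= 0 || slash == path.length() - 1) {
            return Optional.empty();  // need both a token and a file path
        }
        return Optional.of(new StaticPath(path.substring(0, slash), path.substring(slash + 1)));
    }
}
```

Putting the token in the path rather than a query parameter means relative links between archived files resolve under the same tokenized prefix, which is the property Daniel's objection to `?token=` is about.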


          Daniel Beck added a comment -

          user-static-whatever would be the unprotected root action (since we're not changing the prefixes, just what subset of Jenkins gets served on each domain), and ABC123 the magic secret token.

          With https://static.dev.example.com/jenkins/user-static-whatever/ABC123 optionally redirecting to https://dev.example.com/jenkins/job/stuff/javadoc we have some convenient navigation between domains; alternatively, add an extra level for URLs like https://static.dev.example.com/jenkins/user-static-whatever/ABC123/root/com/corp/stuff/package-summary.html so that a relative uplink to the job (or, more generally, the Actionable) could work.


          Matt Sicker added a comment -

          I have a general idea about how to manage the tokens for this. Are you suggesting the token should be in the path instead of as a query parameter?


          Daniel Beck added a comment -

          That's exactly what we're saying. It's basically a getDynamic(String) on an UnprotectedRootAction.


          Jesse Glick added a comment -

          relative uplink to the job

          Are there really any use cases for this? Typically we are serving content in this way because it was generated by some external tool with no knowledge that it is being displayed from Jenkins. Are there plugins which generate files in an untrusted way and then use DirectoryBrowserSupport to display them, while linking to ../? And this could at best work for a single level up, if I understand what you propose.


          Daniel Beck added a comment -

          I was thinking of HTML Publisher, but it looks like that uses rootUrl + job/build.getUrl() to get there, so this shouldn't matter after all.


          Jesse Glick added a comment -

          rootUrl will not work for backlinks, since it would be /jenkins in this example, thus pointing to something like https://static.dev.example.com/jenkins/job/stuff/123/ which is illegal according to our rules. I suppose that could be made to serve a redirect to https://dev.example.com/jenkins/job/stuff/123/.


          BTW tip: to display example URLs in JIRA while suppressing hyperlinks, use

          {{http:}}{{//server/path}}
          


          Daniel Beck added a comment - edited

          Unfortunately I don't think it's possible to implement this without API additions, i.e. plugins will not just magically pick this up when it's added in core.

          The problem is that we need to support permission checks as the URL on the second/resource domain is accessed, and cannot just assume that Read access to owner will be enough. An example in core for that is https://github.com/jenkinsci/jenkins/blob/b8e32de403ad40a7641d0b15ff2f1e36cf522ff4/core/src/main/java/hudson/model/AbstractProject.java#L1832 which has an additional permission check.

          Alternatively we're good with that for the lifetime of a session (+ some undetermined delay), but I'd rather not.


          Jesse Glick added a comment -

          I think it would be acceptable to do the (for example) WORKSPACE check against the current authenticated user once, when the DirectoryBrowserSupport is being created, and then serve a URL prefix which is good for an hour. An administrator might happen to revoke that user’s permission (or delete the user from the security realm) ten minutes later, but so what? If they were going to steal sensitive content, they could have done so already, and if this comes as a surprise and they are being escorted from the building by security they have probably lost the magic link by the time they get out on the sidewalk.

          Anyway, if you feel strongly that we need to define a new abstract API type which saves arbitrary data (in this example I guess a User.id + AbstractProject.fullName) and rechecks permissions on each request, there are only 43 OSS plugins I see creating DirectoryBrowserSupport, so it could be adopted incrementally—most eagerly by plugins which actually record content that is problematic for CSP, or that are widely used (perhaps workflow-support, htmlpublisher, javadoc, maven-plugin, junit-attachments).
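The check-once-then-expire model argued for above could be sketched as a small expiring token table. This is a hypothetical plain-JDK illustration (Jenkins's actual BoundObjectTable is not used here); the permission check is assumed to have happened before `register` is called:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of the check-once model: permissions are verified when
 *  an entry is registered, and the resulting token simply expires after a
 *  fixed lifetime (e.g. one hour), with no re-check on access. */
public class ExpiringTokenTable<T> {
    private static final class Entry<T> {
        final T value;
        final Instant expiry;
        Entry(T value, Instant expiry) { this.value = value; this.expiry = expiry; }
    }

    private final ConcurrentHashMap<String, Entry<T>> table = new ConcurrentHashMap<>();
    private final Duration lifetime;

    public ExpiringTokenTable(Duration lifetime) { this.lifetime = lifetime; }

    /** Caller is expected to have done the permission check before registering. */
    public void register(String token, T value) {
        table.put(token, new Entry<>(value, Instant.now().plus(lifetime)));
    }

    public Optional<T> lookup(String token) {
        Entry<T> e = table.get(token);
        if (e == null) return Optional.empty();
        if (Instant.now().isAfter(e.expiry)) {
            table.remove(token);  // lazily drop expired entries
            return Optional.empty();
        }
        return Optional.of(e.value);
    }
}
```

The trade-off is exactly the one described: a revoked user keeps access for at most the remaining lifetime of any token they already hold.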


          Daniel Beck added a comment -

          So… how bad would it be to allow admins to choose? Here's my proposed help text:

          This option [Name TBD] improves the compatibility with plugins not specifically supporting this feature, at the cost of relaxed security checks.

          When unchecked (the default), plugins need to explicitly register every directory browser instance they intend to make available via a resource URL below the resources root URL. Otherwise Jenkins will not redirect requests for files to their corresponding resource URLs, but will serve the files directly and add Content-Security-Policy headers. Registering includes defining the security checks to perform, on the user identity for which a resource URL is created, before access is granted. As a result, when a user loses the permission to access a workspace, job, or other directory browser, the corresponding resource URLs will stop working as well, since the user's permissions are checked every time the resource URL is accessed.

          When checked, all directory browsers are implicitly registered on first access, and only require that the user retains read access to the model object (typically a job) that the directory browser is associated with. If a user loses a more specific, otherwise required permission, such as Item/Workspace or Build/Artifacts, they will still be able to access the files through resource URLs until those URLs expire.


          Daniel Beck added a comment -

          r2b2_nz FYI, you may be interested in this feature for HTML Publisher, so your input would be appreciated.


          Jesse Glick added a comment -

          I wonder whether you can save the request URI in effect when the DirectoryBrowserSupport is created, along with the user ID, and then try to virtually navigate back to that path whenever serving a request to see if it is still permitted? The catch is that Stapler does not currently offer an API to evaluate a token sequence from app root on demand. (I have wanted such an API a couple of times in the past.)


          Daniel Beck added a comment -

          try to virtually navigate back to that path

          The only way I can imagine this working (manual Stapler#invoke) will just end up generating a lot of additional DBS instances, since the request needs to go to the doWhatever that returns the DBS.

          The catch is that Stapler does not currently offer an API to evaluate a token sequence from app root on demand

          I would imagine I could attempt to find the ancestor corresponding to dbs.owner, grab all further ancestors' tokens as of the constructor invocation of DBS, and then call Stapler#invoke(…, …, dbs.owner, furtherTokens) whenever a resource domain URL is accessed? Would save at least the navigation through to dbs.owner.

          Even if that works, there's way too much that can go wrong with this and it is unclear to me how I would handle that…


          Daniel Beck added a comment -

          Experimentally, the proposal from the previous comment seems to work well enough. Right now I'm doing a "fake" request rooted in the nearest Ancestor that's an AccessControlled, plus its restOfPath (which will point to a file served by the DBS). I'm still passing the real request and response, so I need to prevent the response from being written to, by setting a ThreadLocal flag that tells DBS not to write output. Seems super fragile, and I probably want a dummy HttpServletResponse so the real one cannot be messed up this way.

          Alternatively, I may be able to use this mechanism to write the actual response. That way, I wouldn't even have to store magic DBS instances, just the URL of the DBS, its nearest AccessControlled / restOfPath, and similar metadata. In that case I'll need to make sure to always request the correct file. Right now, it's whatever file was requested for the "main" request that redirected from the regular URL to the resource URL, since I only use it for the permission check.


          Daniel Beck added a comment -

          Alternatively, I may be able to use this mechanism to write the actual response.

          This is what I'm currently doing, and it seems to work pretty well. It even means we don't need the option discussed above, as we can always perform a "live" permission check on an arbitrarily long suffix of the pathInfo. (We start after the last AccessControlled.)

          DBS instances register themselves as soon as generateResponse is called (which seems safer than the constructor, as it's the actual "time of use" rather than an object that could still be passed around), provided we're not responding to a request on the second/resource domain, with…

          • the full URL (except for the file path within the DBS),
          • the nearest ancestor AccessControlled,
          • the restOfPath,
          • and the Authentication.name

          …stored in a ResourceHolder wrapper object in a per-session map keyed by the URL (again with any file path within the DBS removed). If an equivalent ResourceHolder already exists for the URL key (we compare object identity of the AccessControlled), it's reused; otherwise a new one is added. Whatever we got from that is then added, if necessary, to a global map from UUID to ResourceHolder (a WeakReference to it, actually, so hopefully reaping the sessions will remove obsolete ResourceHolders – so far untested), which is what allows the request routing to work.

          If a request arrives at the UnprotectedRootAction, we look up the ResourceHolder corresponding to the UUID, map the actually requested URL (with file path) to the corresponding restOfPath + filePath, and call Stapler#invoke(req, rsp, accessControlled, restOfPathPlusFilePath) as Authentication.name. That just writes the file into the response. The underlying assumption here is that the AccessControlled will implement a StaplerProxy-style permission check, or that the restOfPath contains enough permission checks – for a job workspace, we'd start routing at the job, go through its getTarget permission check, call doWs (with a more specific permission check), and let that handle the response.

          To make sure requests go to the UnprotectedRootAction, a DBS holds an identifier (the UUID, but it could be anything) after successful registration. When it comes to serving a single file, if the second/resource domain is configured and we're not on it, we serve an HTTP 302 redirect to the corresponding URL over there. Otherwise, we serve the file directly, with CSP headers.
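          The bookkeeping described here could be sketched with plain JDK types roughly as follows. This is an illustrative sketch only – the names ResourceRegistry, register, and lookup are made up for this example, and AccessControlled is stood in for by a plain Object; it is not the actual implementation:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// What gets stored per registered directory browser, per the description above.
class ResourceHolder {
    final UUID id = UUID.randomUUID();
    final Object accessControlled;   // nearest AccessControlled ancestor (compared by identity)
    final String restOfPath;
    final String authenticationName;

    ResourceHolder(Object accessControlled, String restOfPath, String authenticationName) {
        this.accessControlled = accessControlled;
        this.restOfPath = restOfPath;
        this.authenticationName = authenticationName;
    }
}

class ResourceRegistry {
    // Per-session: URL key (file path stripped) -> holder; strong refs die with the session.
    private final Map<String, ResourceHolder> sessionMap = new ConcurrentHashMap<>();

    // Global routing table: UUID -> weak ref, so reaping sessions frees obsolete holders.
    static final Map<UUID, WeakReference<ResourceHolder>> GLOBAL = new ConcurrentHashMap<>();

    UUID register(String urlKey, Object accessControlled, String restOfPath, String auth) {
        // Reuse an equivalent holder (same URL key, same AccessControlled identity).
        ResourceHolder h = sessionMap.compute(urlKey, (k, existing) ->
                existing != null && existing.accessControlled == accessControlled
                        ? existing
                        : new ResourceHolder(accessControlled, restOfPath, auth));
        GLOBAL.putIfAbsent(h.id, new WeakReference<>(h));
        return h.id;
    }

    static ResourceHolder lookup(UUID id) {
        WeakReference<ResourceHolder> ref = GLOBAL.get(id);
        return ref == null ? null : ref.get();
    }
}
```

          The weak reference in the global map is what makes the "reaping the sessions removes obsolete holders" hope work: once the session map drops its strong reference, the global entry's referent can be collected, though the stale UUID key itself still needs separate cleanup.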

          A few things left to figure out:

          • The global list grows unbounded with no cleanup. It maps UUID to WeakReference<ResourceHolder> – unsure how much of a problem that is.
          • The per-session lists are built the same way (AFAICT) as BoundObjectTable with strong references in a session attribute, but I haven't seen this get cleaned up yet. Time to #doSimulateOutOfMemory and see what happens…
          • Weird URLs like last*Build generate unnecessarily many instances, as we only look at the URL. Not sure I care enough to try to fix this.
          • We apparently may end up holding references to obsolete objects after "Reload Configuration" is called.
          • There may be problems around the renaming of projects, but these may actually be less than on "normal" URLs, as the URL only matters during registration, i.e. when the user accesses the DBS through a regular (non-resource) URL.
          • Can we really rely on the assumption around the permission checks for the nearest ancestor, and if not, do we care enough? Do we need a guarantee around the expiration of URLs here to limit potential problems?

           


          Jesse Glick added a comment -

          Do we need a guarantee around the expiration of URLs here to limit potential problems?

          I would think all table entries should be time-limited.

          Not sure I followed every detail above, particularly the usage of Stapler.invoke, but it sounds right.

          The UnprotectedRootAction is verifying that the Host header is set to the second domain, right?


          Daniel Beck added a comment -

          The UnprotectedRootAction is verifying that the Host header is set to the second domain, right?

          Yes, it only responds with 404s when not on the second domain (while a filter responds with a 404 to everything that is not accessing the action on the second domain). That works and didn't seem notable.


          Daniel Beck added a comment -

          A further alternative would be to replace the UUID identifying a previously stored set of properties with an encrypted value containing the full path to the file, as well as the authentication to use, which is enough to perform the internal request.

          This works experimentally (using Secret) and cuts down the lines of code quite substantially. We lose some of the minor benefits around the fancy AccessControlled use above, but it seems worth it, unless we discover cases where going through the full URL would have unintended side effects. I do not believe they would exist, otherwise the DBS on the regular URL would be a mess in the exact same way.

          Now on to using something other than Secret, so that nobody in Jenkins gets to build their own "give me access to these URLs as some other user" resource URLs.

          Additionally, we could encode a timestamp which we could use to expire such URLs after a fixed amount of time.


          Daniel Beck added a comment -

          OK so the current implementation:

          • Has its own CryptoConfidentialKey with a random IV for every URL.
          • Encodes authentication, DBS URL, and creation date in the (now super long) string in the URL (all encrypted)

          On access, it's decrypted, and if the age is below a certain threshold, it's handled, otherwise the user is redirected to the real URL. This creates a short loop through (re)authentication (old resource URL -> regular Jenkins URL (might require auth) -> new resource URL) which seems to work mostly OK – once frames are involved, the Jenkins login screen doesn't like to show in a frame (thanks X-Frame-Options), and it's just an empty page if you're not currently logged in. If you have a session, it's just transparent.

          Still seems superior to just going with 404s all the time, and a full reload will fix it (as the top-level page will go through the auth loop without a frame).


          Daniel Beck added a comment -

          On second thought, there's no need to encrypt anything here – we don't need to keep the content secret. We just need to confirm it hasn't been tampered with, i.e. users don't get to define their own resource URLs. So what we need is a signature.


          Matt Sicker added a comment -

          An HMAC essentially, yes. That sounds fine. These are like super limited use JWTs.

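          For illustration, a signed, expiring token along these lines could look as follows. This is a minimal sketch using only the JDK – the SignedResourceUrl class and its methods are invented for this example (the real implementation would use Jenkins' own key-management machinery), and the colon-separated payload assumes user names contain no colons:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;
import java.util.Base64;

public class SignedResourceUrl {
    private final SecretKeySpec key;

    public SignedResourceUrl(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    /** Build a token binding user, path, and creation time. The payload stays
     *  readable; only integrity matters, so it is signed rather than encrypted. */
    public String token(String user, String path, Instant created) throws Exception {
        String payload = user + ":" + created.getEpochSecond() + ":" + path;
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8)) + "." + mac(payload);
    }

    /** Returns the payload if the signature checks out and the token has not
     *  expired; null otherwise (an expired token would trigger a redirect to
     *  the regular URL, minting a fresh token after re-authentication). */
    public String verify(String token, Instant now, long maxAgeSeconds) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return null;
        String payload = new String(
                Base64.getUrlDecoder().decode(token.substring(0, dot)), StandardCharsets.UTF_8);
        byte[] expected = mac(payload).getBytes(StandardCharsets.UTF_8);
        byte[] actual = token.substring(dot + 1).getBytes(StandardCharsets.UTF_8);
        if (!MessageDigest.isEqual(expected, actual)) return null; // constant-time compare
        long created = Long.parseLong(payload.split(":", 3)[1]);
        if (now.getEpochSecond() - created > maxAgeSeconds) return null; // expired
        return payload;
    }

    private String mac(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}
```

          Since the payload carries everything needed to replay the internal request (authentication, path, creation time), the server keeps no per-URL state at all – which is exactly what removes the unbounded global map discussed earlier.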

          Daniel Beck added a comment -

          Plugins affected:

          • https://wiki.jenkins.io/display/JENKINS/Acunetix+Plugin
          • https://wiki.jenkins.io/display/JENKINS/BTC+EmbeddedPlatform
          • https://wiki.jenkins.io/display/JENKINS/HTML+Publisher+Plugin
          • https://wiki.jenkins.io/display/JENKINS/Javadoc+Plugin
          • https://wiki.jenkins.io/display/JENKINS/LoadRunner+Integration
          • https://wiki.jenkins.io/display/JENKINS/Micro+Focus+Application+Automation+Tools
          • https://wiki.jenkins.io/display/JENKINS/NeoLoad+Plugin
          • https://wiki.jenkins.io/display/JENKINS/PRQA+Plugin
          • https://wiki.jenkins.io/display/JENKINS/Redmine+Metrics+Report+Plugin
          • https://wiki.jenkins.io/display/JENKINS/VectorCAST+Execution+Plugin
          • https://wiki.jenkins.io/display/JENKINS/Worksoft+Certify+DashBoard+Plugin
          • https://wiki.jenkins.io/display/JENKINS/Worksoft+Certify+Process+Runner
          • https://wiki.jenkins.io/display/JENKINS/Worksoft+Certify+Process+Suite
          • https://wiki.jenkins.io/display/JENKINS/Worksoft+Certify+RiskBased+PlugIn

          More in comments on https://wiki.jenkins.io/display/JENKINS/Configuring+Content+Security+Policy

          Matt Sicker added a comment -

          Not sure how relevant it would be, but the Audit Log plugin makes HTML audit logs available via DirectoryBrowserSupport. If I wanted to use more advanced UI pages for that, it would likely need its own CSP.


          Joseph Petersen (old) added a comment -

          danielbeck I think the resource root URL docs could be improved by adding a simple example of a reverse proxy setup using nginx/apache.
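          For the record, a minimal nginx sketch for such a setup might look like this. Hostnames and the backend address are placeholders; it assumes Jenkins listens on 127.0.0.1:8080 and the Jenkins resource root URL is configured as https://jenkins-resources.example.com:

```nginx
# Main Jenkins UI
server {
    listen 443 ssl;
    server_name jenkins.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
    }
}

# Resource root: same Jenkins backend, different hostname.
# Jenkins uses the Host header to decide this is a resource request.
server {
    listen 443 ssl;
    server_name jenkins-resources.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

          Both server blocks proxy to the same Jenkins instance; the only difference is the hostname, which is what lets Jenkins serve "untrusted" files from an origin isolated from the main UI.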

          Sagar added a comment -

          Hi all, we are using a performance tool that generates HTML content based on Jinja templates, and we want to publish those inside Jenkins.

          The HTML content is not displayed properly; I understand that it's due to the Content Security Policy. When I tried running commands like System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", ""), the HTML content is still not displayed properly. Does it need a Jenkins restart?
          Now here comes the question of the resource root URL. I have these files generated and archived inside the Jenkins workspace. What should be the URL I need to provide?
          When I open the HTML in the browser, it's pointing to the location in which it's present, not any particular website.

          Can someone please help?


          Kalle Niemitalo added a comment - - edited

          I don’t think changing the property value needs a Jenkins restart. DirectoryBrowserSupport rereads it every time: https://github.com/jenkinsci/jenkins/blob/f48c5f552f72485658c1c98482b42ae42ed1ee8c/core/src/main/java/hudson/model/DirectoryBrowserSupport.java#L380

          You could use the developer features of the web browser to check whether the HTTP response still has a Content-Security-Policy header and what kind.


          Sagar added a comment -

          kon Thanks for your response. I was also looking at the second option of configuring the resource root URL. But what is the resource root URL for the HTML files which were generated and are available in the Jenkins workspace?

          To keep it simple, how do I generate the resource root URL for static files or HTML files present in the Jenkins workspace?


          Kalle Niemitalo added a comment - - edited

          I don’t really have experience with the resource root URL setting, but from what I understand, you don’t “generate” it; rather, you register another hostname in DNS, pointing to your Jenkins controller host, and configure that as the resource root in Jenkins. Then when a user tries to access “untrusted” files (such as files in workspaces) with a Web browser, Jenkins redirects to a URL within the resource root and serves the file from there.

          So, you should talk about the resource hostname with the people who maintain your DNS. They might decide that you need to use a separate second-level domain (like GitHub has github.com for its own UI but githubusercontent.com for untrusted files). Jenkins does not mandate such a strict separation and would be happy with a subdomain for the resources, but perhaps your corporate network has some other web servers that need to be protected from potentially malicious scripts in untrusted files that Jenkins serves under the resource root URL.


          Daniel Beck added a comment -

          This is an issue tracker, please ask development questions on the dev list to a much larger audience.


            danielbeck Daniel Beck