Jenkins / JENKINS-53901

Using readFile does not handle UTF-8 with BOM files

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Blocker
    • Environment: Jenkins 2.121.2 and Jenkins 2.81, Pipeline Groovy Plugin 2.54

      I'm extracting an xml file (nuspec) from some NuGet packages and trying to parse it. In most cases this works fine, but in some the xml was written as UTF-8 with a BOM, and then the parser gets upset and reports:

      org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
      

      The way I'm parsing xml is:

      @NonCPS
      def parsePackage(packageName, packageVersion) {
          def packageFullName = "${packageName}.${packageVersion}"
          bat """curl -L https://www.nuget.org/api/v2/package/${packageName}/${packageVersion} -o ${packageFullName}.nupkg"""
          bat """unzip ${packageFullName}.nupkg -d ${packageFullName}"""

          def nuspecPath = """${packageFullName}\\${packageName}.nuspec"""
          def nuspecContent = readFile file: nuspecPath
          def nuspecXML = new XmlSlurper(false, false).parseText(nuspecContent)
          println nuspecXML.metadata.version

          def newXml = XmlUtil.serialize(nuspecXML)
          return newXml
      }
      

      It looks like readFile does not support UTF-8 with BOM: it passes the leading BOM character through into the returned string.
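      The behaviour can be reproduced outside Jenkins. Java's UTF-8 decoder does not treat a leading BOM specially, so the on-disk bytes EF BB BF survive decoding as the character U+FEFF at the front of the String (a minimal sketch of the symptom, not the readFile implementation):

      ```java
      import java.nio.charset.StandardCharsets;

      public class BomLeak {
          /** Decodes file bytes as UTF-8; a leading EF BB BF survives as U+FEFF. */
          static String decodeUtf8(byte[] fileBytes) {
              return new String(fileBytes, StandardCharsets.UTF_8);
          }
      }
      ```

      An XML parser handed such a string sees U+FEFF before `<?xml ...?>`, which is exactly the "Content is not allowed in prolog" failure above.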


      I tried to replicate it directly in Groovy by doing:

      def xmldata = new File("Newtonsoft.Json.nuspec").text
      def pkg = new XmlSlurper().parseText(xmldata) 
      println pkg.metadata.version.text()
      

      But here the leading BOM character is not passed into the xmldata variable.


      Attached example nuspec with BOM in it.


          [JENKINS-53901] Using readFile does not handle UTF-8 with BOM files

          Sam Van Oort added a comment -

          quas This is a known issue with the Unicode spec and the Java platform's implementation of it, not Pipeline. In UTF-8 the BOM is neither needed nor recommended, and since it is essentially meaningless there, Java transparently passes it through.

          First, I'd make sure to add the encoding: 'UTF-8' argument to your readFile step to ensure the file is read as UTF-8. Then do postprocessing to correct for the nonstandard input.

          Some suggested solutions are available on StackOverflow.

          Personally, I'd do something like this to sanitize your input:

          /** Strips the UTF-8 BOM character (U+FEFF) if present */
          private static String removeUTF8BOM(String s) {
              return s.replace("\uFEFF", "");
          }
          

          (EF BB BF is the BOM's byte sequence on disk; after UTF-8 decoding it becomes the single character \uFEFF, which is what the String actually contains.)

          There are also code snippets out there that take a more efficient approach, only considering the leading characters of the String.
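          The leading-character approach mentioned above can be sketched like this (a hedged example: once a UTF-8 stream is decoded, the BOM, if present, is the single character U+FEFF at index 0):

          ```java
          public class BomStripper {
              /** Removes a leading U+FEFF, inspecting only the first character. */
              static String stripBOM(String s) {
                  if (s != null && !s.isEmpty() && s.charAt(0) == '\uFEFF') {
                      return s.substring(1);
                  }
                  return s;
              }
          }
          ```

          Unlike a full-string replace, this touches at most one character and leaves any U+FEFF appearing later in the content (where it is a legitimate zero-width no-break space) alone.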


          Sam Van Oort added a comment -

          This is due to a known problem with Java's implementation of the UTF-8 spec. Suggested an easy workaround in Pipeline code to solve the issue.


          Jakub Pawlinski added a comment -

          OK, but if it's a Java issue, why could I not replicate it locally using Groovy 2.6.0-alpha-1, JVM 1.8.0_111 (Oracle Corporation), on Windows 10?


          Ilguiz Latypov added a comment - - edited

          I guess Sam used vague wording.  It's the files that harbour the UTF-8-encoded BOM mark at the beginning, which is useless because UTF-8's bytewise storage does not depend on the architecture's byte order.

          $ python -c 'u = b"\xEF\xBB\xBF".decode("utf-8"); print "%04X" % (ord(u[0]),)'
          FEFF
          

          Microsoft creates files with these useless but confusing 3 bytes at the beginning of its UTF-8-encoded files.  Now every program that reads such files needs to trim the Unicode BOM character at the beginning of the contents after decoding to Unicode.

          public static CharSequence deBOM(CharSequence s) {
              if (s == null) {
                  return null
              } else if (s.length() == 0) {
                  return s
              } else if (s[0] == '\uFEFF') {
                  return s.drop(1)
              } else {
                  return s
              }
          }
          

          https://stackoverflow.com/questions/5406172/utf-8-without-bom

          Perhaps newer XmlSlurper versions perform this sanitation.


          Ilguiz Latypov added a comment - - edited

          This appears to be a post-modern BOM that is supposed to announce the encoding of the file before it is decoded:

                    +------------------+----------+
                    | Leading sequence | Encoding |
                    +------------------+----------+
                    | FF FE 00 00      | UTF-32LE |
                    | 00 00 FE FF      | UTF-32BE |
                    | FF FE            | UTF-16LE |
                    | FE FF            | UTF-16BE |
                    | EF BB BF         | UTF-8    |
                    +------------------+----------+

          http://www.rfc-editor.org/rfc/rfc4329.txt

          So readFile needs a mode (or a special value for the encoding parameter) to sense the post-modern BOM and decode the rest of the contents accordingly.


          Jakub Pawlinski added a comment -

          The same issue affects readCSV, and possibly every other way of reading files via Jenkins. With readCSV it is more severe, because I cannot step in between reading the file's content and that content being parsed into a Commons CSV structure. The only workaround is to readFile and parse manually, which makes readCSV (and similar steps) redundant.

          I still don't understand why you claim this is a Java issue rather than a Jenkins one, when it is not reproducible even in a newer Groovy version.


          Jakub Pawlinski added a comment - - edited

          Affected functionalities: readCSV, readJSON, readManifest, readMavenPom, readProperties, readYaml

            Assignee: Unassigned
            Reporter: Jakub Pawlinski (quas)
            Votes: 1
            Watchers: 5