Jenkins / JENKINS-22321

More concise description of timeout options


    Details


      Description

      When trying to copy a rather huge (hundreds of MB) file to a really slow CIFS share, from time to time the following happens:

      CIFS: Connecting from host [Jenkins-4-35]
      CIFS: Connecting with configuration [X] ...
      CIFS: Removing WINS from name resolution
      CIFS: Setting response timeout [30.000]
      CIFS: Setting socket timeout [35.000]
      CIFS: copy [smb://.../my.exe]
      CIFS: Disconnecting configuration [X] ...
      ERROR: Exception when publishing, exception message [Transport1 timedout waiting for response to SmbComNTCreateAndX[command=SMB_COM_NT_CREATE_ANDX,received=false,errorCode=0,flags=0x0018,flags2=0xC803,signSeq=0,tid=2049,pid=59750,uid=2048,mid=5114,wordCount=24,byteCount=195,andxCommand=0xFF,andxOffset=0,flags=0x00,rootDirectoryFid=0,desiredAccess=0x008B,allocationSize=0,extFileAttributes=0x0080,shareAccess=0x0007,createDisposition=0x0005,createOptions=0x00000040,impersonationLevel=0x0002,securityFlags=0x03,name=\X\...\my.exe]]

      Apparently the 30 s timeout is not long enough for the file to be transferred completely. Fair enough, but the documentation of the timeout options should clearly state that the timeout is measured end-to-end for a single file and is not a "server does not respond at all" type of timeout. In other words, it is not sufficient that the transfer is alive and running; the complete file must be transmitted within that time. The plugin documentation does not make this clear.
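      To illustrate why an end-to-end per-file timeout surprises users, here is a minimal sketch (not plugin code; the helper and the assumed throughput figure are hypothetical) that estimates the timeout a file of a given size would need, given a worst-case share speed. The 30 000 ms default matches the "Setting response timeout [30.000]" line in the log above.

      ```java
      public class TimeoutCheck {
          // Hypothetical helper: minimal timeout (ms) needed to copy
          // fileSizeBytes end-to-end at minBytesPerSec, plus a safety margin.
          static long requiredTimeoutMs(long fileSizeBytes, long minBytesPerSec, long marginMs) {
              return (fileSizeBytes * 1000L) / minBytesPerSec + marginMs;
          }

          public static void main(String[] args) {
              long responseTimeoutMs = 30_000;     // plugin default, per the log above
              long fileSize = 300L * 1024 * 1024;  // a 300 MB artifact ("hundreds of MB")
              long slowShare = 1024 * 1024;        // assumption: 1 MB/s to a really slow share
              long needed = requiredTimeoutMs(fileSize, slowShare, 5_000);
              // A 300 MB file at 1 MB/s needs about 300 s, far beyond the 30 s default.
              System.out.println(needed > responseTimeoutMs);
          }
      }
      ```

      With these assumed numbers the required timeout is roughly ten times the default, which is exactly the failure mode shown in the log.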

        Attachments

          Activity

          mkarg Markus KARG added a comment -

          Another improvement request: apparently the timeout is measured per complete file, so the effective maximum timeout is driven not by the share but by the individual file (and hence by the Transfer Set). It would therefore make sense to allow overriding the timeout value at the Transfer Set level: the "typical" timeout could be configured in the plugin as usual, and jobs with huge files could override it. Currently this is not possible, so if one job with huge files sits among many jobs with small files, all the small-file jobs must also use the huge timeout. That is unfortunate, because if the share fails, all the small jobs wait a very long time before they can report the failure.

          Possibly it would be best to simply use the file size as a factor against some configurable share-speed value, i.e. have the plugin calculate the timeout for each file individually?
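          The two suggestions above could be combined as sketched below (purely illustrative; the method, parameter names, and the fallback policy are assumptions, not plugin API): use a per-Transfer-Set override when one is configured, otherwise derive a per-file timeout from the file size and an assumed share speed, never going below the plugin default.

          ```java
          public class EffectiveTimeout {
              // Hypothetical policy: Transfer Set override wins; otherwise scale the
              // timeout with file size, with the plugin default as a floor.
              static long effectiveTimeoutMs(Long transferSetOverrideMs, long pluginDefaultMs,
                                             long fileSizeBytes, long assumedBytesPerSec) {
                  if (transferSetOverrideMs != null) {
                      return transferSetOverrideMs;
                  }
                  long sizeBasedMs = (fileSizeBytes * 1000L) / assumedBytesPerSec;
                  return Math.max(pluginDefaultMs, sizeBasedMs);
              }

              public static void main(String[] args) {
                  // A small file keeps the 30 s default; a 300 MB file at an
                  // assumed 1 MB/s gets a proportionally larger timeout.
                  System.out.println(effectiveTimeoutMs(null, 30_000, 64 * 1024, 1024 * 1024));
                  System.out.println(effectiveTimeoutMs(null, 30_000, 300L * 1024 * 1024, 1024 * 1024));
                  // An explicit Transfer Set override takes precedence.
                  System.out.println(effectiveTimeoutMs(120_000L, 30_000, 300L * 1024 * 1024, 1024 * 1024));
              }
          }
          ```

          The floor keeps small-file jobs failing fast when the share is down, while huge-file jobs automatically get the time they need.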


            People

            Assignee:
            Unassigned
            Reporter:
            mkarg Markus KARG
            Votes:
            1
            Watchers:
            1

              Dates

              Created:
              Updated: