Status: Closed
jdk-1.7.0_67 (master and slave)
Custom Linux running kernel 3.10.59, 64-bit (master and slave)
We have 19GB of tarballs and ISOs that are built on a Linux slave. Pulling the artifacts to the master takes 33 minutes over a local network. I modified the Jenkins source to not use compression and reran the job; it took 9 minutes. This mirrors what happens if you use tar and ssh from the command line with the same machines.
My test method was a Jenkins job that did nothing but archive the artifacts; I pre-populated the workspace with the data I wanted archived.
To turn off compression I modified the function copyRecursiveTo in FilePath.java (core/src/main/java/hudson). For my test I changed the four lines in that function that referenced TarCompression.GZIP to TarCompression.NONE.
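To illustrate what that switch amounts to: the TarCompression choice decides whether the tar stream is wrapped in a gzip stream before crossing the channel. This is a minimal standalone sketch of that wrapping using the JDK's java.util.zip classes; it is not the actual Jenkins code, just the gzip-vs-raw tradeoff in isolation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class TarCompressionDemo {

    // Analogue of TarCompression.GZIP: the payload is gzip-wrapped
    // before it goes over the wire. CPU cost on both ends, fewer bytes.
    public static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // The receiving side unwraps the gzip stream back to the raw bytes.
    public static byte[] decompress(byte[] gzData) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(gzData))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // Highly compressible payload: gzip shrinks it dramatically,
        // which is exactly the case where compression pays off.
        byte[] payload = new byte[1 << 20];
        byte[] packed = compress(payload);
        System.out.println("raw=" + payload.length + " gz=" + packed.length);
    }
}
```

TarCompression.NONE simply skips the wrapping, so the bytes flow straight through. On a fast LAN the gzip CPU cost dominates, which is why skipping it cut the archive step from 33 to 9 minutes; on a slow WAN the byte savings would dominate instead.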
I don't suggest simply changing TarCompression to NONE, but rather making it configurable. It should be configurable because for slaves running over the internet, remotely, in the cloud, etc., compression may still be desirable.
As a note: if I tar up the same data with compression, pipe it through ssh, and untar it on the destination, it takes 19 minutes. Doing the same thing without compression takes 5 minutes.
- is duplicated by: JENKINS-30815 GZip compression of master-slave transfer should be optional
- links to: PR#4205
|Field|Original Value|New Value|
|Priority|Minor|Major|
|Labels|archiving artifacts linux performance slave|archiving artifacts gzip linux performance slave|
|Remote Link| |PR#4205 (Web Link)|
|Resolution| |Fixed|
|Status|Open|Closed|
I closed JENKINS-30815 as a duplicate of this one, after carlg pointed it out to me. But you can still have a look there for more evidence of how undesirable this compression can be in some cases.