Type: Improvement
Resolution: Fixed
Priority: Major
Labels: None
Released As: 2.379
Task logs, produced by AsyncPeriodicWork / AsyncAperiodicWork under $JENKINS_HOME/log/tasks, can grow very large. We have seen instances with files that account for tens of gigabytes.
In most cases the culprit is https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/hudson/model/WorkspaceCleanupThread.java and, most recently, https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/BackgroundGlobalBuildDiscarder.java.
Although a log file growing without bound usually indicates an issue that needs to be addressed in that particular task, we think that Jenkins should be defensive and enforce a size limit. Large instances are especially likely to see large task logs from tasks that operate on every item and log a record for each of them.
By default, those logs are rotated once a day but are not limited in size. Gigabytes of data can bring you close to your storage limit and cause problems far more critical than mere disk consumption.
Jenkins core should:
- adjust those defaults (a sketch of a defensive size cap follows this list)
- warn users when a specific task's log is reaching the size limit
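For illustration, here is a minimal sketch of what such a defensive cap could look like. The class name, the 32 MiB default, and the single-backup rotation policy are all assumptions made for this example, not the actual Jenkins core implementation.
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

/**
 * Hypothetical sketch, not Jenkins core API: a defensive rotation check
 * that an AsyncPeriodicWork-style task could run before appending to its
 * task log under $JENKINS_HOME/log/tasks.
 */
public class TaskLogRotator {

    /** Assumed default cap; core would presumably make this configurable. */
    private static final long MAX_LOG_SIZE_BYTES = 32L * 1024 * 1024; // 32 MiB

    /**
     * Rotates the given task log to a single ".1" backup once it exceeds
     * the cap. Returns true when a rotation happened, so the caller can
     * warn the user that the task is producing oversized logs.
     */
    public static boolean rotateIfOversized(File logFile) throws IOException {
        if (!logFile.isFile() || logFile.length() < MAX_LOG_SIZE_BYTES) {
            return false; // nothing to do: missing file or still under the cap
        }
        File backup = new File(logFile.getParentFile(), logFile.getName() + ".1");
        Files.deleteIfExists(backup.toPath()); // keep only one backup generation
        Files.move(logFile.toPath(), backup.toPath());
        return true;
    }
}
{code}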
is duplicated by: JENKINS-66571 BackgroundGlobalBuildDiscarder logs are bloated on instances with many jobs (Closed)
relates to: JENKINS-66854 AsyncPeriodicWork produces a lot of disk churn (Closed)
links to:
[JENKINS-64151] Adjust file size rotation for AsyncPeriodicWork / AsyncAperiodicWork Task Logs
Assignee | New: Félix Belzunce Arcos [ fbelzunc ] |
Status | Original: Open [ 1 ] | New: In Progress [ 3 ] |
Link | New: This issue is duplicated by |
Link | New: This issue relates to |
Status | Original: In Progress [ 3 ] | New: Open [ 1 ] |
Assignee | Original: Félix Belzunce Arcos [ fbelzunc ] |
Assignee | New: Allan BURDAJEWICZ [ allan_burdajewicz ] |
Just had another customer with this problem. Are we working on this?
IMO we just should not log this info by default. I doubt anyone ever looks at these logs, except in rare cases to debug build cleanup.
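As a minimal sketch of that suggestion, assuming the per-item records go through java.util.logging: the logger name matches the class cited in the description, and demoting the records to FINE is this comment's proposal, not current core behavior.
{code:java}
import java.util.logging.Logger;

public class QuietTaskLogging {

    // Logger name taken from the class cited in the description; whether
    // these records actually flow through java.util.logging is an
    // assumption of this sketch.
    private static final Logger LOGGER =
            Logger.getLogger("hudson.model.WorkspaceCleanupThread");

    public static void main(String[] args) {
        // Per-item records demoted to FINE: dropped by the default INFO
        // console handler, visible only when an admin opts in.
        LOGGER.fine("Deleting workspace of some-job on some-agent");

        // A one-line summary stays at INFO so the task still leaves a trace.
        LOGGER.info("Workspace cleanup finished: 1 workspace deleted");
    }
}
{code}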