JENKINS-24304

Enable/Disable verbose Git command logging in Jenkins build log


    Description

      The git plugin currently prints out the git commands that it executes.

      git rev-parse master^{commit} # timeout=10
      

      In some cases, these commands can be numerous and thus distract the user from other pertinent information in the build log.

      It would be very valuable to be able to enable or disable this verbose output via an option in the job configuration.

          Activity

            scoheb Scott Hebert created issue -
            scoheb Scott Hebert made changes -
            Field Original Value New Value
            Component/s git-client [ 17423 ]
            Component/s git [ 15543 ]
            markewaite Mark Waite added a comment -

            Could you try the log parser plugin as an alternative?

            I think the enhancement request is reasonable, but the log parser plugin may already give you that, without waiting for someone to choose to implement your request.
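
            For what it's worth, a minimal parsing-rules file for the log parser plugin might look like the sketch below. The rule format is LEVEL /regex/; note that the plugin only categorizes lines in its parsed console view, it does not remove them from the raw log.

            # de-emphasize the git plugin's echoed commands in the parsed view
            ok /^\s*> git rev-parse /
            ok /^\s*> git fetch /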

            markewaite Mark Waite made changes -
            Link This issue is related to JENKINS-9052 [ JENKINS-9052 ]

            integer Kanstantsin Shautsou added a comment -

            Could you provide a log where this git output is so terrible that you want to disable it?
            Having a full log during the build process is good practice.
            jeremyrampon Jeremy Rampon added a comment -

            In my case, it does a git rev-parse on every single public branch, which prints hundreds of lines. I'm not sure why the Git plugin is doing this. In the Git config for this job, a specific tag is specified, so that's all it should need. I'm fine with defaulting to printing the git commands, but these rev-parse commands are really polluting the logs in my case.

            markewaite Mark Waite added a comment -

            You might check whether defining the property org.jenkinsci.plugins.gitclient.GitClient.quietRemoteBranches=true on the java command line that starts the Jenkins server helps.

            If the line you're trying to avoid is "Seen branch xyzzy", then the conditional in the code seems to display it only if quietRemoteBranches has its default value of false.
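
            For example, if Jenkins is started directly from the war file, the property would be passed like this (a sketch; adapt the path and any other startup flags to your installation, and keep the -D before -jar):

            java -Dorg.jenkinsci.plugins.gitclient.GitClient.quietRemoteBranches=true -jar jenkins.war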

            markewaite Mark Waite made changes -
            Assignee Nicolas De Loof [ ndeloof ]
            campos_ddc Diogo Campos added a comment -

            I don't even have the "Seen branch" line in my output, but I do have hundreds of rev-parse calls.

            nicoddemus Bruno Oliveira added a comment -

            I second that request. Here is a sample of what we see in our CI.
            jeremyrampon Jeremy Rampon added a comment -

            I noticed this only happens when using the Git additional behavior "Checkout to specific local branch" in the job. If I remove this option, all the printed commands go away. But this is not a suitable workaround for me, as my build scripts depend on the current branch being built (devp vs. prod).

            markewaite Mark Waite added a comment - - edited

            The output happens whenever the plugin needs to convert a name to a SHA-1. That happens in several areas (like the "Prune stale remote tracking branches" behavior).
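
            As a rough way to gauge how many of those conversions a job might trigger, you could count the candidate refs with plain git from the job's workspace clone (a sketch):

            git for-each-ref refs/tags refs/remotes | wc -l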

            rtyler R. Tyler Croy made changes -
            Workflow JNJira [ 157223 ] JNJira + In-Review [ 179538 ]
            samdeane Sam Deane added a comment - - edited

            This seems to have started happening recently on our server, in a way it didn't previously.

            All of the log lines are of this form:

            09:41:13 > git rev-parse refs/tags/builds/appstore/3.4/15562^{commit} # timeout=10

            and there are tons of them.

            Worse still, it's taking minutes to process them all.

            Has something changed in the git plugin recently that could have caused this? It's not really an option to remove these tags.

            samdeane Sam Deane added a comment -

            To illustrate the impact of this problem, here's the first line like this from a recent job:

            09:33:32 > git rev-parse refs/tags/builds/appstore/3.4.1/15745^{commit} # timeout=10

            and here's the last:

            09:41:38 > git rev-parse refs/tags/issues/closed/5510^{commit} # timeout=10

            Note the timestamps. Our job, which usually takes 20 minutes or so, is now taking an extra 8 minutes...

            samdeane Sam Deane added a comment -

            (I realise that the log output and the underlying performance are different issues - both annoying though!)

            samdeane Sam Deane added a comment -

            I've just discovered the "Do not fetch tags" option, which may help. I'll try turning it on, but presumably I'll also need to manually remove all the local tags from each of the Jenkins slaves?

            I still don't understand why this has started happening recently, however. Many of the tags have been there for years.
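
            In case it helps anyone, a sketch of clearing the local tags from a workspace clone (this deletes only local tag refs, nothing on the remote; -r is GNU xargs and skips the delete when there are no tags):

            git tag -l | xargs -r git tag -d   # delete all local tags
            git fetch --no-tags                # and don't re-create them on the next fetch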

            markewaite Mark Waite added a comment -

            samdeane I am not aware of any recent change in the plugin that would affect this behavior. Did you recently upgrade either the git plugin or the git client plugin?

            Did the number of tags (or branches) in your repository increase dramatically recently?

            Did a job setting change recently?

            Did you add a new plugin recently (like the timestamper plugin)?

            samdeane Sam Deane added a comment - - edited

            > Did you recently upgrade either the git plugin or the git client plugin?

            They're upgraded (by me) periodically. I think the last change was 17 days ago. A number of plugins were updated, including the git plugin, which went from 3.0.1 to 3.2.0.

            I can't say for certain if that coincides with this problem starting.

            > Did the number of tags (or branches) in your repository increase dramatically recently?

            No. There are 2242 tags, but they're largely historical.

            > Did a job setting change recently?

            No.

            > Did you add a new plugin recently (like the timestamper plugin)?

            No, the timestamper plugin has been installed for a long time.

            samdeane Sam Deane added a comment -

            Just noticed this in one of the logs, reported directly after the last rev-parse entry. Not sure if it's relevant:

            07:41:09 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see:
            https://wiki.jenkins-ci.org/display/JENKINS/Remove+Git+Plugin+BuildsByBranch+BuildData

            samdeane Sam Deane added a comment - - edited

            The content of that link is very unclear.

            Is it supposed to be a script to run as a workaround? Once? Periodically? From the shell? From a job?

            Is it a report that the script itself is the cause of the "leak"?

            It would be helpful if it was explained clearly, with instructions on what to do if you hit this problem.

            The issue that it points to in turn seems to suggest that the script should be run, but even after reading the whole thread of comments (most of which are hidden by default), it's far from clear whether it will help, whether it's always safe to run, etc.

            markewaite Mark Waite added a comment -

            > Is it supposed to be a script to run as a workaround? Once? Periodically? From the shell? From a job?

            It is run as a workaround, whenever you encounter the problem. It can be run periodically if you wish. Another alternative is to limit the amount of history you retain with your jobs.

            > Is it a report that the script itself is the cause of the "leak"?

            No, the script is not the cause of the problem. The git plugin is the cause of the problem.

            > It would be helpful if it was explained clearly, with instructions on what to do if you hit this problem.

            I'm not sure I understand. Can you edit that wiki page to better describe it? If a user has many history records in a git job, the git plugin incorrectly stores too much information about that history within each of the individual build records. Those bloated build records are then loaded into memory, which slows Jenkins startup and makes the Jenkins process much larger than necessary.
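
            As a rough diagnostic, assuming the default on-disk layout where each build record is stored as build.xml under JENKINS_HOME, the bloated records can be spotted by size (a sketch):

            find "$JENKINS_HOME/jobs" -name build.xml -size +1M -exec ls -lh {} \;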

            > The issue that it points to in turn seems to suggest that the script should be run, but even after reading the whole thread of comments (most of which are hidden by default), it's far from clear whether it will help, whether it's always safe to run, etc.

            If you depend on the information in those bloated build records, then the script is not safe to run. Most people do not depend on the information in those bloated build records.

            Another way to avoid the issue is to limit the number of build records you retain for your jobs. The configuration slicing plugin will allow you to modify the job definitions of all jobs in your system to limit the amount of history you keep for the jobs. That then avoids the problem by removing historical build records which include that duplicated information.
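
            For reference, a sketch of what limiting retained builds looks like in a job's config.xml once "Discard old builds" is enabled (element names as in a freestyle job; the numbers are only illustrative):

            <logRotator class="hudson.tasks.LogRotator">
              <daysToKeep>-1</daysToKeep>
              <numToKeep>50</numToKeep>
              <artifactDaysToKeep>-1</artifactDaysToKeep>
              <artifactNumToKeep>-1</artifactNumToKeep>
            </logRotator>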


            People

              Assignee: Unassigned
              Reporter: scoheb Scott Hebert
              Votes: 12
              Watchers: 14
