
Timestamps missing for agent-based steps in Pipeline Job 2.26

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Component: timestamper-plugin
    • Environment: Pipeline Job 2.26, Timestamper 1.8.10, Jenkins 2.138.2 LTS, Windows Server 2016 master and agents
    • Fix Version: 1.9

      After upgrading to Pipeline Job 2.26 earlier today, the Console Log for pipeline builds only shows timestamps for the operations performed by the master node (i.e., the initial git checkout); all remaining build steps performed on agent nodes lack timestamps entirely. The job correctly recognizes that timestamps were enabled, so it offers the usual "System clock time" and "Elapsed time" options on the left, but these only affect the master-node timestamps; timestamps for steps running on agent nodes are missing regardless.
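      For context, a minimal pipeline of the affected shape might look like the sketch below (the agent label and commands are hypothetical, not taken from the reporter's actual job):

```groovy
// Hypothetical minimal Jenkinsfile illustrating the report: after the
// upgrade, output of agent-based durable steps (bat/sh) loses timestamps,
// while master-side lines such as the initial checkout keep them.
timestamps {
    node('windows') {             // assumed agent label
        checkout scm              // these lines still carry timestamps
        bat 'echo building...'    // this step's output lacks timestamps
    }
}
```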


          Jesse Glick added a comment -

          For anyone interested in trying this, use this build.


          Kay Wegner added a comment - - edited

          jglick I hereby confirm that 'timestamper-1.8.11-rc427.9b19c6eab4e0.hpi' worked for us. As a side effect it also seems to fix the issues reported in JENKINS-49836. Thank you.


          Florian Ramillien added a comment - - edited

          Hi, after upgrading to the latest Jenkins LTS with all plugins updated (pipeline + timestamper), we hit this missing-timestamp issue plus a huge side effect: Windows builders are disconnected and builds are aborted. (Maybe Linux too, but this project builds only on Windows.)

          After many tests, this is related to log volume when the Timestamper plugin is used with a pipeline job. We have some compile tasks (a bat step running Python scripts) generating a huge log volume (3-5 MB), and we see:

          • Slowed-down log display in the Jenkins UI (the build may be slowed down too, but we are not sure)
          • The agent is disconnected

          Trying some other configurations, it works:

          • Same log volume but without the Timestamper plugin
          • Timestamper plugin with less log output
          • Same log volume + Timestamper, but in a "classic" (freestyle) job, not a pipeline job

          I suspect that all the log lines for a timestamp are buffered somewhere and cause this issue. Fixing timestamps not being handled on each output line should fix this too.

          Errors we can see in the traces look like:
          [Pipeline] End of Pipeline
          ERROR: script apparently exited with code 0 but asynchronous notification was lost
          Finished: FAILURE
          Or more random errors like:

          2018-10-23 15:01:30 Cannot contact win-xxxx-xx: java.lang.InterruptedException
          hudson.remoting.RequestAbortedException: java.nio.channels.ClosedChannelException

          We will try your patch and/or the durable-task configuration option described here. Do you want me to open a new bug report for this?


          Florian Ramillien added a comment - - edited

          Our tests results are:

          • Standard job + Timestamper 1.8.10 => build OK in under 10 min, with the expected timestamp on each line
          • Pipeline job without Timestamper => build OK in under 10 min
          • Pipeline job + Timestamper 1.8.11 RC => build reported as failed after 19 min, with the expected timestamp on each line
          • Pipeline job + Timestamper 1.8.11 RC + USE_WATCHING=false => build OK in under 10 min, with the expected timestamps

          I say "reported as failed" because if we look on the agent, the build is correct, and in the "durable-xxxx" directory we can see that:

          • the job result is OK (jenkins-result.txt)
          • the delta time between "jenkins-main.bat" and "jenkins-result.txt" is less than 10 min (the expected build time)
          • the log file (jenkins-logs.txt) size for this step is 3.5 MB

          Looking further at the log in the Jenkins UI, I can see that the pipeline ends in the middle of the log; 9 min later some more log lines are displayed (but not the full log) and the job fails:

          2018-10-24 10:18:24 Some CPP compiler outputs
          2018-10-24 10:18:24 [Pipeline] }[Pipeline] // timestamps[Pipeline] }
          2018-10-24 10:27:34 Some CPP compiler outputs
          2018-10-24 10:27:34 [Pipeline] // sshagent Some CPP compiler outputs
          2018-10-24 10:27:34 Some CPP compiler outputs
          ... many outputs lines with time stamp ...
          2018-10-24 10:27:34 Some CPP compiler outputs
          ERROR: script apparently exited with code 0 but asynchronous notification was lost[ 2018-10-24T10:27:34.892Z
          Finished: FAILURE
          

           


          Jesse Glick added a comment -

          Windows builders disconnected and builds are aborted

          Maybe JENKINS-53888.


          Florian Ramillien added a comment -

          The result is the same, yes. But I don't think the cause is the same; in our case the connection with the agent works except with a specific combination of components:

          • A long build with a simple task (ping for 20 min) works
          • A shorter build (10 min) with a huge log generated by a "bat" step fails (see previous comment)
          • Removing the "timestamps" step from the pipeline fixes the issue
          • Reverting from "push" to "pull" logs in "durable-task" fixes the issue too

          In our case, something happens at the end of the job between master and agent: the master never receives the "OK" result from the agent (nor the full logs). After 9 min of inactivity the build fails and the agent is disconnected. It is always reproducible with huge logs, but maybe log size is just a factor triggering another hidden synchronization/locking problem.

          For now "USE_WATCHING=false" is set on our Jenkins master, and that is an acceptable workaround for us.
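          For reference, USE_WATCHING is a JVM system property on the Jenkins master (a later comment in this thread spells out the full property name). A minimal sketch of setting it, assuming a Debian-style service config file; the exact location varies by installation:

```shell
# Sketch: revert durable-task from "push" (watching) to "pull" log handling.
# /etc/default/jenkins is an assumption matching Debian-style packages --
# adjust the file and variable name for your installation.
JAVA_ARGS="$JAVA_ARGS -Dorg.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep.USE_WATCHING=false"
```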


          Jesse Glick added a comment -

          framillien please discuss in JENKINS-53888.


          Ljubisa Punosevac added a comment - - edited

          Hi, 

          Unfortunately, this problem still occurs on Jenkins 2.190.1 with Timestamper 1.10.
          I also tried older versions of the plugin that were reported here to work, but with the same results.
          The following option has also been set on the Jenkins master as a JVM parameter, again with the same outcome:

          -Dorg.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep.USE_WATCHING=false
          

          All our jobs are pipelines and timestamps appear only in a few places. Most of the time they are gone.

          [Pipeline] configFileProvider
          [2019-10-11T11:22:10.289Z] provisioning config files...
          [2019-10-11T11:22:10.299Z] copy managed file [live-env-configuration.properties] to file:/var/lib/jenkins/workspace/generic-test-junit@tmp/config11258587709499260226tmp
          [Pipeline] {
          [Pipeline] readProperties
          [Pipeline] }
          [2019-10-11T11:22:10.350Z] Deleting 1 temporary files
          [Pipeline] // configFileProvider
          [Pipeline] sh
          [2019-10-11T11:22:09.383Z] Creating folder: /var/lib/jenkins/.m2
          [2019-10-11T11:22:09.388Z] Creating file: /var/lib/jenkins/.m2/settings.xml
          [2019-10-11T11:22:09.395Z] Creating folder: /var/lib/jenkins/workspace/generic-test-junit/.repository
          [2019-10-11T11:22:10.683Z] `s3/ec-test-release-pipeline/191011019/pipeline.properties` -> `pipeline.properties`
          [2019-10-11T11:22:10.683Z] Total: 962 B, Transferred: 962 B, Speed: 37.11 KiB/s
          [Pipeline] script
          [Pipeline] {
          [Pipeline] readProperties
          [Pipeline] }
          [Pipeline] // script
          [Pipeline] lock
          [2019-10-11T11:22:10.789Z] Trying to acquire lock on [Label: s3, Quantity: 1]
          [2019-10-11T11:22:10.789Z] Lock acquired on [Label: s3, Quantity: 1]
          [Pipeline] {
          [Pipeline] configFileProvider
          [2019-10-11T11:22:10.852Z] provisioning config files...
          [2019-10-11T11:22:10.862Z] copy managed file [live-env-configuration.properties] to file:/var/lib/jenkins/workspace/generic-test-junit@tmp/config1796990335769048953tmp
          [Pipeline] {
          [Pipeline] readProperties
          [Pipeline] }
          [2019-10-11T11:22:10.916Z] Deleting 1 temporary files
          [Pipeline] // configFileProvider
          [Pipeline] withCredentials
          [2019-10-11T11:22:10.958Z] Masking supported pattern matches of $S3_USERNAME or $S3_PASSWORD
          [Pipeline] {
          [Pipeline] sh
          Added `s3` successfully.
          [Pipeline] }
          [Pipeline] // withCredentials
          [Pipeline] withEnv
          [Pipeline] {
          [Pipeline] withEnv
          [Pipeline] {
          [Pipeline] fileExists
          [Pipeline] dir
          Running in /var/lib/jenkins/ramdisk
          [Pipeline] {
          [Pipeline] configFileProvider
          provisioning config files...
          copy managed file [live-env-configuration.properties] to file:/var/lib/jenkins/ramdisk@tmp/config2573695607279314921tmp
          [Pipeline] {
          [Pipeline] readProperties
          [Pipeline] }
          Deleting 1 temporary files
          [Pipeline] // configFileProvider
          

          Is there a solution to this problem?

          Best,
          Ljubisa.
           


          Jesse Glick added a comment -

          ljubisap it is hard to know offhand what your issue is. If you are consistently missing timestamps, it would be better to file a fresh bug with complete, self-contained steps to reproduce your problem from scratch, and link it to this one.


          jglick Created new ticket JENKINS-59788 with a simple dummy pipeline showing how to reproduce it.


            Assignee: Jesse Glick (jglick)
            Reporter: Nick Jones (medianick)
            Votes: 9
            Watchers: 18