JENKINS-10686: Polling on slaves can hang


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Defect
    • Component/s: p4-plugin
    • Labels:
      None
      Description

      Using Perforce plugin 1.3.0

      Sometimes, when Perforce polling happens on a slave, it hangs and never finds changes.

      The job's polling log shows only this:

      Started on Aug 10, 2011 10:30:57 AM
      Looking for changes...
      Using node: <snip>
      Using remote perforce client: <snip>--1607756523

      The thread dump from the slave in question is below; I don't see anything related to SCM polling:

      Channel reader thread: channel

      "Channel reader thread: channel" Id=9 Group=main RUNNABLE (in native)
          at java.io.FileInputStream.readBytes(Native Method)
          at java.io.FileInputStream.read(FileInputStream.java:199)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
          - locked java.io.BufferedInputStream@1262043
          at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2248)
          at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2541)
          at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2551)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
          at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
          at hudson.remoting.Channel$ReaderThread.run(Channel.java:1008)

      main

      "main" Id=1 Group=main WAITING on hudson.remoting.Channel@10f6d3
          at java.lang.Object.wait(Native Method)
          - waiting on hudson.remoting.Channel@10f6d3
          at java.lang.Object.wait(Object.java:485)
          at hudson.remoting.Channel.join(Channel.java:758)
          at hudson.remoting.Launcher.main(Launcher.java:418)
          at hudson.remoting.Launcher.runWithStdinStdout(Launcher.java:364)
          at hudson.remoting.Launcher.run(Launcher.java:204)
          at hudson.remoting.Launcher.main(Launcher.java:166)

      Ping thread for channel hudson.remoting.Channel@10f6d3:channel

      "Ping thread for channel hudson.remoting.Channel@10f6d3:channel" Id=10 Group=main TIMED_WAITING
          at java.lang.Thread.sleep(Native Method)
          at hudson.remoting.PingThread.run(PingThread.java:86)

      Pipe writer thread: channel

      "Pipe writer thread: channel" Id=13 Group=main WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12f4818
          at sun.misc.Unsafe.park(Native Method)
          - waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12f4818
          at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
          at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
          at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
          at java.lang.Thread.run(Thread.java:662)

      pool-1-thread-545

      "pool-1-thread-545" Id=708 Group=main RUNNABLE
          at sun.management.ThreadImpl.dumpThreads0(Native Method)
          at sun.management.ThreadImpl.dumpAllThreads(ThreadImpl.java:374)
          at hudson.Functions.getThreadInfos(Functions.java:817)
          at hudson.util.RemotingDiagnostics$GetThreadDump.call(RemotingDiagnostics.java:93)
          at hudson.util.RemotingDiagnostics$GetThreadDump.call(RemotingDiagnostics.java:89)
          at hudson.remoting.UserRequest.perform(UserRequest.java:118)
          at hudson.remoting.UserRequest.perform(UserRequest.java:48)
          at hudson.remoting.Request$2.run(Request.java:270)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)

          Number of locked synchronizers = 1
          - java.util.concurrent.locks.ReentrantLock$NonfairSync@1e30132

      Finalizer

      "Finalizer" Id=3 Group=system WAITING on java.lang.ref.ReferenceQueue$Lock@103333
          at java.lang.Object.wait(Native Method)
          - waiting on java.lang.ref.ReferenceQueue$Lock@103333
          at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
          at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
          at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

      Reference Handler

      "Reference Handler" Id=2 Group=system WAITING on java.lang.ref.Reference$Lock@191659c
          at java.lang.Object.wait(Native Method)
          - waiting on java.lang.ref.Reference$Lock@191659c
          at java.lang.Object.wait(Object.java:485)
          at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)

      Signal Dispatcher

      "Signal Dispatcher" Id=4 Group=system RUNNABLE

      The master's thread dump has a thread for this hung job showing the polling attempt:

      SCM polling for hudson.model.FreeStyleProject@11fa600[<snip>]

      "SCM polling for hudson.model.FreeStyleProject@11fa600[<snip>]" Id=2213 Group=main TIMED_WAITING on [B@1dbe391
          at java.lang.Object.wait(Native Method)
          - waiting on [B@1dbe391
          at hudson.remoting.FastPipedInputStream.read(FastPipedInputStream.java:173)
          at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
          at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
          at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
          - locked java.io.InputStreamReader@a4139e
          at java.io.InputStreamReader.read(InputStreamReader.java:167)
          at java.io.BufferedReader.fill(BufferedReader.java:136)
          at java.io.BufferedReader.readLine(BufferedReader.java:299)
          - locked java.io.InputStreamReader@a4139e
          at java.io.BufferedReader.readLine(BufferedReader.java:362)
          at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:330)
          at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:292)
          at com.tek42.perforce.parse.Workspaces.getWorkspace(Workspaces.java:54)
          at hudson.plugins.perforce.PerforceSCM.getPerforceWorkspace(PerforceSCM.java:1144)
          at hudson.plugins.perforce.PerforceSCM.compareRemoteRevisionWith(PerforceSCM.java:840)
          at hudson.scm.SCM._compareRemoteRevisionWith(SCM.java:354)
          at hudson.scm.SCM.poll(SCM.java:371)
          at hudson.model.AbstractProject.poll(AbstractProject.java:1305)
          at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:420)
          at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:449)
          at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)

          Number of locked synchronizers = 1
          - java.util.concurrent.locks.ReentrantLock$NonfairSync@3ed79

      As expected, going to Hudson > Manage shows this message:

      There are more SCM polling activities scheduled than handled, so the threads are not keeping up with the demands. Check if your polling is hanging, and/or increase the number of threads if necessary.
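
      For reference, the warning refers to the master's SCM polling thread pool. A minimal script-console sketch for inspecting and resizing it, assuming SCMTrigger.DescriptorImpl exposes the polling thread count via getter/setter as in cores of this era (the value 20 below is arbitrary):

          import hudson.model.Hudson
          import hudson.triggers.SCMTrigger

          def desc = Hudson.instance.getDescriptorByType(SCMTrigger.DescriptorImpl.class)
          // Assumed API: getPollingThreadCount()/setPollingThreadCount(int); 0 means unlimited
          println "Current polling thread count: " + desc.getPollingThreadCount()
          desc.setPollingThreadCount(20)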

      Many people had the same issue with the Subversion plugin, and a feature was added to that plugin to allow polling only on the master.
      https://issues.jenkins-ci.org/browse/JENKINS-5413

      We should probably have the same thing in the Perforce plugin: an option to poll only on the master.

            Activity

            brianharris brianharris created issue -
            rpetti Rob Petti made changes -
            Link: This issue is duplicated by JENKINS-10687
            rpetti Rob Petti added a comment -

            This is likely caused by short-lived, on-demand slaves, which means there's not much we can do to fix it. The workaround is to use the already existing "Poll Only on Master" option.
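
            A minimal script-console check of whether a given job has that option enabled; the job name is a placeholder, and the isPollOnlyOnMaster() getter on PerforceSCM is an assumption inferred from the checkbox label, not confirmed plugin API:

                import hudson.model.Hudson

                // Placeholder job name; scm is the job's PerforceSCM instance
                def job = Hudson.instance.getItemByFullName("my-perforce-job")
                println job.scm.isPollOnlyOnMaster() // assumed getter for "Poll Only on Master"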

            rpetti Rob Petti made changes -
            Resolution: Not A Defect
            Status: Open → Resolved
            rpetti Rob Petti made changes -
            Link: This issue duplicates JENKINS-9067
            frozen_man Brian Smith added a comment -

            I am seeing this exact error and my slaves are always on... is there another explanation? I would have thought that using the slaves to poll SCM would add more resources to the pool and make things run better. It would seem that running everything from the master would tend to bog it down, although that is based on an incomplete understanding of how things work under the hood on the master.

            rpetti Rob Petti added a comment -

            Poor slave connectivity can also cause this. You should poll from the master whenever you can in order to eliminate network congestion.

            That being said, please make sure you are fully up to date. There have been a lot of enhancements since this ticket was closed that should improve things substantially.

            frozen_man Brian Smith added a comment -

            OK. I will change the jobs to poll from the master.

            We are currently using the 1.509.1 LTS version and this issue occurs once every 3-4 weeks or so. Maybe the error message should be changed and/or the option to use the slaves to do the polling should be removed since that doesn't seem to be a valid option any more?

            rpetti Rob Petti added a comment -

            Which perforce plugin version are you using? That's the more important factor here, not the Jenkins version.

            Using slaves for polling is a valid option provided you have a solid connection to them, and they are not transient in nature.

            Which error message are you referring to? There are no error messages included in this ticket description.

            frozen_man Brian Smith added a comment -

            Good point. We are using 1.3.17 of the Perforce plugin, which is about a year old but newer than the original dates in this JIRA entry. I didn't see anything in the latest change notes related to polling, so I haven't tried updating the plugin yet as a potential problem-solving route.

            We are seeing the same error message as in the original post above: "There are more SCM polling activities scheduled than handled, so the threads are not keeping up with the demands. Check if your polling is hanging, and/or increase the number of threads if necessary." I scoured the internet looking for ways to "increase the number of threads" and hit a dead end. My further searches turned up this old JIRA entry, which seemed to be right in line with the problem we are seeing.

            All of our nodes are currently local to our building, so our connection to them should be on the solid side.

            rpetti Rob Petti added a comment - edited

            Yeah, you will want to update the perforce plugin to the latest. There should have been connectivity improvements since then.

            That error message is a warning generated by Jenkins core, so we can't change it. If it shows up, you will need to restart Jenkins or kill your polling threads. You can kill threads using the following system Groovy script:

            Thread.getAllStackTraces().keySet().each { item ->
                // Interrupt only threads whose names mark them as SCM polling threads
                if (item.getName().contains("SCM polling")) {
                    println "Interrupting thread " + item.getId() + " " + item.getName()
                    item.interrupt()
                }
            }
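
            To run it, paste it into the script console on the master (Manage Jenkins > Script Console); it only interrupts threads whose names contain "SCM polling", so other executor threads are left alone.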
            jazzyjayx Jay Spang added a comment -

            Is there any way to resolve this issue without polling on the master? I have the same issue but polling on the master isn't viable for me.

            One limitation of that option is that the master and the slave have to be on the same OS. My Jenkins master is a Windows server, but my iOS builds (which run on an OSX server) hang while polling all the time. I can't switch the job to poll on the master because it tries to run "/usr/bin/p4 changes" on the Windows master (with predictable results).

            rpetti Rob Petti added a comment - edited

            "One limitation of that option is that the master and the slave have to be on the same OS."

            There is no such limitation. You can set up the master and slaves to use different p4 executables, even for the same job.

            That being said, you shouldn't be running into this issue with the latest version anyway.

            jazzyjayx Jay Spang added a comment -

            Can you clarify how to get around this, then? I do have the latest version of the Perforce plugin (1.3.26):

            • The master is Windows, and uses c:\p4\p4.exe
            • The slave is OSX, and uses /usr/bin/p4
            • I have the job configured to run on the OSX slave and use /usr/bin/p4 as the executable.

            If I check "Poll only on master", the job tries to poll by running "/usr/bin/p4" on the Windows master, which obviously fails. If I change the Perforce executable in the job to c:\p4\p4.exe, Polling will work again, but the job immediately fails to sync the workspace (because it tries to run c:\p4\p4.exe on the OSX slave).

            rpetti Rob Petti added a comment -

            You can override the path to P4 in the Node configuration.

            1. Make a new perforce installation in the global Jenkins config, and set it to C:\p4\p4.exe.
            2. In your node configuration for your slave, override the path of this installation to point to /usr/bin/p4.
            3. In your job configuration, change it to use the new perforce installation you just set up.

            This is very much the same process as when setting up Java or some other utility.
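
            The same per-node override can also be scripted. A sketch, assuming the plugin's tool installation class is hudson.plugins.perforce.PerforceToolInstallation with a standard DescriptorImpl; the node and installation names are placeholders:

                import hudson.model.Hudson
                import hudson.tools.ToolLocationNodeProperty
                import hudson.tools.ToolLocationNodeProperty.ToolLocation
                import hudson.plugins.perforce.PerforceToolInstallation // assumed class name

                def node = Hudson.instance.getNode("osx-slave") // placeholder node name
                def desc = Hudson.instance.getDescriptorByType(PerforceToolInstallation.DescriptorImpl.class)
                // "p4-default" must match the installation's name in the global config
                node.getNodeProperties().add(new ToolLocationNodeProperty(new ToolLocation(desc, "p4-default", "/usr/bin/p4")))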

            alexey_larsky Alexey Larsky added a comment -

            Hi Rob,

            I've had the same issue for years: after restarting a slave (when polling from that slave), polling hangs.
            I can't poll from the master because I use specially configured permanent workspaces on the slaves and don't want to create duplicates on the master.
            The only way to fix the hang is to restart the master, which is not convenient.
            Do you know another way to fix polling, or could this issue be fixed?

            rpetti Rob Petti added a comment -

            Upgrade to the latest version, switch to the p4-plugin, or use the system groovy script I mentioned above to kill hung polling threads.

            alexey_larsky Alexey Larsky added a comment -

            I use version 1.3.33 - the latest in LTS. It periodically hangs on slaves.
            PS. Thank you. The script is working.

            rtyler R. Tyler Croy made changes -
            Workflow: JNJira → JNJira + In-Review
            ircbot Jenkins IRC Bot made changes -
            Component/s: perforce-plugin → p4-plugin

              People

              Assignee: Unassigned
              Reporter: brianharris
              Votes: 0
              Watchers: 5

                Dates

                Created:
                Updated:
                Resolved: