• Type: Bug
    • Resolution: Not A Defect
    • Priority: Minor
    • Component: docker-plugin
    • Labels: None

      docker plugin    1.1.6

      jenkins 2.180

      docker version
      Client:
      Version: 1.13.1
      API version: 1.26
      Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
      Go version: go1.10.3
      Git commit: b2f74b2/1.13.1
      Built: Wed May 1 14:55:20 2019
      OS/Arch: linux/amd64

      Server:
      Version: 1.13.1
      API version: 1.26 (minimum version 1.12)
      Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
      Go version: go1.10.3
      Git commit: b2f74b2/1.13.1
      Built: Wed May 1 14:55:20 2019
      OS/Arch: linux/amd64
      Experimental: false

       

      ERROR LOG:

       

      16-Jun-2019 17:02:08.442 INFO [Computer.threadPoolForRemoting [#29656]] io.jenkins.docker.DockerTransientNode$1.println Stopped container '1c816dd046940935d0ca56043a13bf170f81dc7bc018cd11a66708a72988c075' for node 'centos-jenkins-slave-0010muy45zjay'.
      16-Jun-2019 17:02:08.451 SEVERE [dockerjava-netty-103-12] com.github.dockerjava.core.async.ResultCallbackTemplate.onError Error during callback
      com.github.dockerjava.api.exception.ConflictException: {"message":"You cannot remove a running container 1c816dd046940935d0ca56043a13bf170f81dc7bc018cd11a66708a72988c075. Stop the container before attempting removal or use -f"}

      at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:107)
      at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:33)
      at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
      at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
      at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
      at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
      at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
      at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
      at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
      at java.lang.Thread.run(Thread.java:745)

          [JENKINS-58033] error when remove docker slave

          pjdarton added a comment -

          It's not a bug.

          The docker-plugin uses the dockerjava library, and the dockerjava library (needlessly) logs a lot of exceptions as "SEVERE" that are part of its official API, are perfectly normal, and are passed up to the calling code (the docker-plugin) to decide what to do about them.
          The dockerjava library is a bit of a drama queen - you can ignore 99.999% of these exceptions.
          Most of what it logs is just saying something like "OMG! Red alert! Panic! Something normal happened!"

          e.g. if you ask it (dockerjava) to stop a container that's already stopped, it'll tell you that the container has already stopped, so you know you can go ahead and ask it to remove it ... but it'll also log an error along the lines of "It's a total disaster! We didn't actually need to do anything after all".
          Similarly, when we're making sure a container is gone, we'll ask it to remove the container and, if there's nothing to do because the container no longer exists, it'll log severe exceptions as it panics about the fact that what it was asked to do had already been done and no further action was necessary.
          Sadly, there's no way of telling it not to log the exceptions that it's also passing back to whatever called it (in this case the docker-plugin code, which knows that no panic is necessary in this situation). All you can do is filter those out of the log.
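
          For illustration only - this is a hedged sketch using the public dockerjava API, not the docker-plugin's actual code - the stop-then-remove cleanup described above could look something like this, with the "already stopped" / "already gone" exceptions swallowed as success and the 409 conflict from this issue's log answered by a forced removal:

          import com.github.dockerjava.api.DockerClient;
          import com.github.dockerjava.api.exception.ConflictException;
          import com.github.dockerjava.api.exception.NotFoundException;
          import com.github.dockerjava.api.exception.NotModifiedException;

          public class ContainerCleanupSketch {

              /** Stop a container (if it's still running) and then remove it. */
              static void stopAndRemove(DockerClient client, String containerId) {
                  try {
                      client.stopContainerCmd(containerId).exec();
                  } catch (NotModifiedException alreadyStopped) {
                      // Already stopped - dockerjava logs this as SEVERE, but it's fine; carry on.
                  } catch (NotFoundException alreadyGone) {
                      return; // Container doesn't exist any more - nothing left to remove.
                  }
                  try {
                      client.removeContainerCmd(containerId).exec();
                  } catch (NotFoundException alreadyGone) {
                      // Someone else removed it first - still counts as success here.
                  } catch (ConflictException stillRunning) {
                      // The 409 from this issue's log: the daemon still thinks the container
                      // is running, so do what its message suggests and force the removal.
                      client.removeContainerCmd(containerId).withForce(true).exec();
                  }
              }
          }

          Whether the caller retries, forces, or gives up is its own decision - the point is that every one of those catch blocks corresponds to an exception that dockerjava will already have shouted about in the log.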

          The problem is that sometimes dockerjava really does have something worth mentioning and, in such cases, an exception will get passed up by the dockerjava code to whatever was calling it, and "whatever was calling it" will have an exception it doesn't know how to handle ... at which point "whatever was calling it" will log an error, so you'll see the dockerjava exception is immediately followed by the "whatever was calling it" error. In such circumstances, the exception that dockerjava logged may provide useful additional information as to what actually went wrong.

          TL;DR: Ignore everything logged by dockerjava unless it's also accompanied by an error raised by other (less shouty) code.
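
          If you do want to act on the "filter those out of the log" advice, one hedged sketch (the logger name is copied from the log excerpt above; this isn't an official docker-plugin or dockerjava facility) is to silence that one logger via java.util.logging, which Jenkins uses - per-logger levels can also be adjusted from the Jenkins UI:

          import java.util.logging.Level;
          import java.util.logging.Logger;

          public class QuietDockerJavaCallbacks {

              // Keep a static reference so the configured logger isn't garbage-collected.
              private static final Logger CALLBACK_LOGGER =
                      Logger.getLogger("com.github.dockerjava.core.async.ResultCallbackTemplate");

              /** Call once at startup (e.g. from an init script) to drop the noisy SEVERE records. */
              public static void install() {
                  // Nothing from this class gets published; per the comment above, the same
                  // exceptions are also passed back to the docker-plugin, which already decides
                  // what (if anything) needs doing about them.
                  CALLBACK_LOGGER.setLevel(Level.OFF);
              }
          }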


            Assignee: Unassigned
            Reporter: arthur7834 (king arthur)
            Votes: 0
            Watchers: 2
