It would be nice if the plugin supported buildx.

      I enabled Docker Experimental Features and set buildx as the default builder, but after doing so, a pipeline using a call such as:

      docker.build("calida_shop_deploy_" + env.GIT_TAG_NAME.toLowerCase(), '-f shop/Dockerfile.ci shop')

      pretty quickly fails with:

      [Pipeline] // stage
       + docker build -t calida_shop_deploy_1bc64c3a1a902600e7e8d245a4ee4d9debc6107b -f etc/docker/shop/Dockerfile.ci shop
       unknown shorthand flag: 't' in -t
       See 'docker --help'.
      Usage: docker [OPTIONS] COMMAND
      A self-sufficient runtime for containers
      Options:
       --config string Location of client config files (default
       "/root/.docker")
       -c, --context string Name of the context to use to connect to the
       daemon (overrides DOCKER_HOST env var and
       default context set with "docker context use")
       -D, --debug Enable debug mode
       -H, --host list Daemon socket(s) to connect to
       -l, --log-level string Set the logging level
       ("debug"|"info"|"warn"|"error"|"fatal")
       (default "info")
       --tls Use TLS; implied by --tlsverify
       --tlscacert string Trust certs signed only by this CA (default
       "/root/.docker/ca.pem")
       --tlscert string Path to TLS certificate file (default
       "/root/.docker/cert.pem")
       --tlskey string Path to TLS key file (default
       "/root/.docker/key.pem")
       --tlsverify Use TLS and verify the remote
       -v, --version Print version information and quit
      Management Commands:
       builder Manage builds
       checkpoint Manage checkpoints
       config Manage Docker configs
       container Manage containers
       context Manage contexts
       engine Manage the docker engine
       image Manage images
       network Manage networks
       node Manage Swarm nodes
       plugin Manage plugins
       secret Manage Docker secrets
       service Manage services
       stack Manage Docker stacks
       swarm Manage Swarm
       system Manage Docker
       trust Manage trust on Docker images
       volume Manage volumes
      Commands:
       attach Attach local standard input, output, and error streams to a running container
       build Build an image from a Dockerfile
       commit Create a new image from a container's changes
       cp Copy files/folders between a container and the local filesystem
       create Create a new container
       deploy Deploy a new stack or update an existing stack
       diff Inspect changes to files or directories on a container's filesystem
       events Get real time events from the server
       exec Run a command in a running container
       export Export a container's filesystem as a tar archive
       history Show the history of an image
       images List images
       import Import the contents from a tarball to create a filesystem image
       info Display system-wide information
       inspect Return low-level information on Docker objects
       kill Kill one or more running containers
       load Load an image from a tar archive or STDIN
       login Log in to a Docker registry
       logout Log out from a Docker registry
       logs Fetch the logs of a container
       pause Pause all processes within one or more containers
       port List port mappings or a specific mapping for the container
       ps List containers
       pull Pull an image or a repository from a registry
       push Push an image or a repository to a registry
       rename Rename a container
       restart Restart one or more containers
       rm Remove one or more containers
       rmi Remove one or more images
       run Run a command in a new container
       save Save one or more images to a tar archive (streamed to STDOUT by default)
       search Search the Docker Hub for images
       start Start one or more stopped containers
       stats Display a live stream of container(s) resource usage statistics
       stop Stop one or more running containers
       tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
       top Display the running processes of a container
       unpause Unpause all processes within one or more containers
       update Update configuration of one or more containers
       version Show the Docker version information
       wait Block until one or more containers stop, then print their exit codes
      Run 'docker COMMAND --help' for more information on a command.
      

      Which is weird, because running `docker build --help` clearly lists `-t` as supported:

      docker build --help
      Usage: docker [OPTIONS] COMMAND
      A self-sufficient runtime for containers
      Options:
       --config string Location of client config files (default "/root/.docker")
       -c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
       -D, --debug Enable debug mode
       -H, --host list Daemon socket(s) to connect to
       -l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
       --tls Use TLS; implied by --tlsverify
       --tlscacert string Trust certs signed only by this CA (default "/root/.docker/ca.pem")
       --tlscert string Path to TLS certificate file (default "/root/.docker/cert.pem")
       --tlskey string Path to TLS key file (default "/root/.docker/key.pem")
       --tlsverify Use TLS and verify the remote
       -v, --version Print version information and quit
      Management Commands:
       builder Manage builds
       checkpoint Manage checkpoints
       config Manage Docker configs
       container Manage containers
       context Manage contexts
       engine Manage the docker engine
       image Manage images
       network Manage networks
       node Manage Swarm nodes
       plugin Manage plugins
       secret Manage Docker secrets
       service Manage services
       stack Manage Docker stacks
       swarm Manage Swarm
       system Manage Docker
       trust Manage trust on Docker images
       volume Manage volumes
      Commands:
       attach Attach local standard input, output, and error streams to a running container
       build Build an image from a Dockerfile
       commit Create a new image from a container's changes
       cp Copy files/folders between a container and the local filesystem
       create Create a new container
       deploy Deploy a new stack or update an existing stack
       diff Inspect changes to files or directories on a container's filesystem
       events Get real time events from the server
       exec Run a command in a running container
       export Export a container's filesystem as a tar archive
       history Show the history of an image
       images List images
       import Import the contents from a tarball to create a filesystem image
       info Display system-wide information
       inspect Return low-level information on Docker objects
       kill Kill one or more running containers
       load Load an image from a tar archive or STDIN
       login Log in to a Docker registry
       logout Log out from a Docker registry
       logs Fetch the logs of a container
       pause Pause all processes within one or more containers
       port List port mappings or a specific mapping for the container
       ps List containers
       pull Pull an image or a repository from a registry
       push Push an image or a repository to a registry
       rename Rename a container
       restart Restart one or more containers
       rm Remove one or more containers
       rmi Remove one or more images
       run Run a command in a new container
       save Save one or more images to a tar archive (streamed to STDOUT by default)
       search Search the Docker Hub for images
       start Start one or more stopped containers
       stats Display a live stream of container(s) resource usage statistics
       stop Stop one or more running containers
       tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
       top Display the running processes of a container
       unpause Unpause all processes within one or more containers
       update Update configuration of one or more containers
       version Show the Docker version information
       wait Block until one or more containers stop, then print their exit codes
      Run 'docker COMMAND --help' for more information on a command.
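
      In the meantime, a workaround some users apply while the plugin lacks buildx support is to bypass docker.build and shell out to buildx directly. A minimal sketch, assuming the same tag and Dockerfile paths as the report above (the stage name is hypothetical):

      ```groovy
      // Hypothetical Jenkinsfile sketch: calling buildx directly instead of docker.build,
      // since the plugin may mangle arguments when buildx is set as the default builder.
      stage('Build') {
          def tag = "calida_shop_deploy_" + env.GIT_TAG_NAME.toLowerCase()
          sh "docker buildx build -t ${tag} -f shop/Dockerfile.ci shop"
      }
      ```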
      

       

          [JENKINS-61372] Docker buildx support

          dor s added a comment -

          Not sure why, but docker.withRegistry is for some reason blocking access to the buildx plugin:
           

          docker.withRegistry("https://${registry}", "ecr:us-west-1:jenkins-agent") {      
            sh("docker buildx version")   
          }

           
           
          output:

          17:24:48 + docker buildx version
          17:24:48 docker: 'buildx' is not a docker command.
          17:24:48 See 'docker --help'

           
          and without docker.withRegistry I get the following output:

          17:24:48 + docker buildx version
          17:24:48 github.com/docker/buildx v0.6.0 d9ee3b134cbc2d09513fa7fee4176a3919e05887

           
          It's not a permissions issue; it seems like docker.withRegistry might have a different scope, or something like that, which prevents docker from finding its CLI plugins.

          Any idea how to work around this?


          Daniel Qian added a comment -

          dordor same issue. I'm pretty sure that docker buildx is installed and a builder instance is created on the host machine, but I only get the default instance:

          docker.withRegistry("https://${registry}", "ecr:us-west-1:jenkins-agent") {
            sh 'docker buildx ls'
          }
          

          output:

          + docker buildx ls
          NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
          default * docker                  
            default default         running linux/amd64, linux/386


          Nicolo Mendola added a comment -

          We are hitting the same problem. We get the following warning on every build:

          DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
                      Install the buildx component to build images with BuildKit:
                      https://docs.docker.com/go/buildx/

          Any news about this issue?

          Matt added a comment -

          gadelat

          I very much want this feature too. A small edit may help future readers: not sure if I'm missing the obvious, but the snippets you provided don't actually show -t in the options. It looks like you pasted the output of docker --help instead of docker build --help.

          I ran the same commands on a recent (2023) version, and -t is there:

           

          % docker --version 
          Docker version 20.10.23, build 7155243
          % docker build --help
          Usage:  docker build [OPTIONS] PATH | URL | -

          Build an image from a Dockerfile

          Options:
                --add-host list           Add a custom host-to-IP mapping (host:ip)
                --build-arg list          Set build-time variables
                --cache-from strings      Images to consider as cache sources
                --disable-content-trust   Skip image verification (default true)
            -f, --file string             Name of the Dockerfile (Default is 'PATH/Dockerfile')
                --iidfile string          Write the image ID to the file
                --isolation string        Container isolation technology
                --label list              Set metadata for an image
                --network string          Set the networking mode for the RUN instructions during build (default "default")
                --no-cache                Do not use cache when building the image
            -o, --output stringArray      Output destination (format: type=local,dest=path)
                --platform string         Set platform if server is multi-platform capable
                --progress string         Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
                --pull                    Always attempt to pull a newer version of the image
            -q, --quiet                   Suppress the build output and print image ID on success
                --secret stringArray      Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret
                --ssh stringArray         SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]])
            -t, --tag list                Name and optionally a tag in the 'name:tag' format
                --target string           Set the target build stage to build.
           

           

           

           


          Vegard Hagen added a comment - - edited

          We found a solution for using docker buildx inside docker.withRegistry(...){...} that works with our setup.

          The issue is that docker buildx picks up DOCKER_CONFIG, which is changed by docker.withRegistry(...){...}. buildx uses this environment variable to look for builder info unless you also have a BUILDX_CONFIG environment variable set. Our solution was to explicitly set

          BUILDX_CONFIG=/home/jenkins/.docker/buildx
          

          which is the default folder where builder info is stored (assuming the command runs as the jenkins user).

          What docker.withRegistry(...){...} (and withDockerRegistry(...){...}) does is change the DOCKER_CONFIG environment variable to a temporary workspace folder and run docker login there to create a config.json file with the credentials for the registry.

          I suppose an alternative solution is to create builders inside withDockerRegistry(...){...}, but they would be ephemeral or just "unreachable" after the closure.
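
          The workaround above can be sketched in a pipeline like this; it is only a sketch, assuming the jenkins user's home is /home/jenkins as in the comment, and reusing the registry variable and ECR credential ID from earlier comments:

          ```groovy
          // Sketch: pin BUILDX_CONFIG so buildx keeps finding its builders even though
          // docker.withRegistry swaps DOCKER_CONFIG to a temporary workspace folder.
          withEnv(['BUILDX_CONFIG=/home/jenkins/.docker/buildx']) {
              docker.withRegistry("https://${registry}", "ecr:us-west-1:jenkins-agent") {
                  sh 'docker buildx ls'   // should now list the builders created on the host
              }
          }
          ```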


          Amit Dar added a comment -

          We also need this feature.

          Any update regarding the implementation?


          Matt added a comment - - edited

          Edit: Complete rewrite, got it to work.

          I was trying to follow vegardbeid's and gadelat's notes, but eventually discovered that much of the past turbulence is solved by newer versions of Docker.

          First, I'm using an Amazon Linux 2 OS with Docker version 25.0.5. What I did:

          1. Build a Jenkins node AMI with Amazon Linux 2 and install qemu-user-static; on boot, systemd-binfmt registers the emulators qemu installed.
          2. Update /etc/docker/daemon.json to use the containerd image store, which enables multi-arch images locally. (Required if you want to build, tag, then push.)
          3. Create a builder named mybuilder as the jenkins user (stored under /var/lib/jenkins/.docker/buildx).
          4. On recent versions of Docker, docker build uses buildx by default, so no need to pre-configure that.
          5. Wrap the build in withDockerRegistry().
          6. Inject a specific builder name, along with platform specs: `docker.build(imageName, "--builder mybuilder --platform linux/arm64,linux/amd64 --push .")`
          7. It's also possible to do a local --load and then docker push if additional tags need to be applied.
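
          Steps 5 and 6 above might look like this in a Jenkinsfile; this is a sketch, not a verified configuration, and the registry URL and credential ID are placeholders:

          ```groovy
          // Sketch of steps 5-6: a multi-arch buildx build wrapped in withDockerRegistry.
          // Assumes a builder named "mybuilder" was already created for the jenkins user.
          withDockerRegistry([url: "https://${registry}", credentialsId: 'my-registry-creds']) {
              docker.build(imageName,
                  "--builder mybuilder --platform linux/arm64,linux/amd64 --push .")
          }
          ```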

          I had additional confusion pushing to an Artifactory-hosted private registry: the jenkins user required additional privileges compared to doing single-arch builds. I am currently giving it full admin, as that's the only thing I could get to work.


            Assignee: Unassigned
            Reporter: gadelat (Gabriel Ostrolucký)
            Votes: 9
            Watchers: 11