• Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Component: p4-plugin
    • Environment:
      - Jenkins v2.204.6
      - P4 plugin v1.11.5

      Referring to case id 00895236.

      When using a localhost proxy (127.0.0.1:1666) on each build slave, one has to specify this address as part of a Perforce password-based credential.

      Then even the server (the CI master) contacts the specified proxy, although no data is pulled: the proxy's data folder is empty and its log file is empty.

      My current workaround is to install a local proxy on the master node as well, just so that a local proxy can be used on the slaves.

      Please either remove this "alive test" or make it somehow optional/configurable.
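
      For reference, a minimal sketch of that workaround, assuming the standard Helix Proxy (p4p) binary is installed on the master; the server address, cache path and log path below are placeholders, not values from this setup:

      # Run a local Perforce proxy on the Jenkins master so that the credential's
      # P4PORT (127.0.0.1:1666) is also reachable from the master.
      # perforce.example.com:1666 stands in for the real P4D server address.
      p4p -p 1666 -t perforce.example.com:1666 -r /var/cache/p4p -L /var/log/p4p.log -d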

          [JENKINS-68136] Jenkins master needs local proxy installed

          Karl Wirth added a comment -

          Confirmed that even for builds that run 100% on the slave, the master still sends Perforce commands.

          Reproduction Steps:

          (1) Create a slave on the P4D server.

          (2) Set Jenkins credential to be localhost:1666 (valid on slave but not on master).

          (3) Create a pipeline job and, in the Pipeline definition, add a script that runs on a build slave. For example:

           

          pipeline {
              options { skipDefaultCheckout() }
              agent { label 'LinuxDesktop' }
              stages {
                  stage('Hello') {
                      steps {
                          p4sync charset: 'none', credential: 'DesktopP4DLocalhost', populate: autoClean(delete: true, modtime: false, parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'], pin: '', quiet: true, replace: true, tidy: false), source: depotSource('//depot/SkipDefault/...')
                      }
                  }
              }
          }
          

           

          (4) Execute the job. The following error is seen:

           

          Running on LinuxDesktop in /filestoreSSD/Vagrant/Swarm/21.2/jenkins_node/working/workspace/TestOnDesktopLocalHostOnSlave
          [Pipeline] {
          [Pipeline] stage
          [Pipeline] { (Hello)
          [Pipeline] p4sync
          Executor number at runtime: 0
          P4: Connection retry: 1
          Unable to connect to Perforce server at localhost:1666
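
          To make the asymmetry concrete, here is a minimal sketch of a connectivity check, assuming the p4 command-line client is installed on the agent; the label and port are taken from the example above and the stage name is illustrative only:

          pipeline {
              agent { label 'LinuxDesktop' }
              stages {
                  stage('Proxy reachability') {
                      steps {
                          // Succeeds on the slave because the proxy listens on its loopback
                          // interface; the equivalent pre-flight connection made from the
                          // master fails, producing the "Unable to connect" error above.
                          sh 'p4 -p 127.0.0.1:1666 info'
                      }
                  }
              }
          }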
          

           


          This one is almost a year old. Any hope that it gets out of the backlog and into planning for the next release?


          Karl Wirth added a comment -

          Hi heiko_nardmann - Can you remind me what the business reason was for needing localhost:1666 instead of P4HostName:1666?


          I have two locations for my CI slaves:

          1. on-premise I have a lot of slaves: those are located in Freiburg, Germany
          2. inside cloud (Azure) I have another one: that is in Azure Dublin

          Both groups shall use a Perforce proxy: the first group uses an on-premise Perforce proxy in Freiburg, and the second uses a Perforce proxy on the machine itself, as the connection between Azure Dublin and our site in Freiburg is far too slow.

          Since I need to contact a localhost-based Perforce proxy in Azure, I also need a dedicated Perforce credentialId containing "127.0.0.1:1999". The P4 plugin then has the problematic behaviour that it tries to contact the proxy given by the P4PORT inside the credentialId from the master as well. Of course, normally there shouldn't be any need for a Perforce proxy to be installed on the Jenkins server.

          I really wonder how other customers with the combination (Jenkins, Perforce, Azure slaves) get along with this ... maybe they don't have any proxy on the Azure slave. Or they have Jenkins inside Azure as well. Or they accept the slow file transfer. Or they use an Azure data center "right next" to them - not Germany <-> Ireland.

          Of course I hope that I can get rid of this Azure system this year but I'm not sure about this.

          So the priority is still 'minor', but maybe someone can check what this initial communication to the P4PORT inside the credentialId is for. If we know that, we might find a better way to avoid the need for a local proxy on the Jenkins server: e.g. adding the ability to configure a second, master-only credentialId for this initial communication, i.e. distinguishing between the Perforce proxy that the Jenkins server uses and the one that a slave uses.
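
          To illustrate the two-location setup described above, a sketch of how the per-location credentials end up being used; the credential IDs and node labels are placeholders, the populate options are copied from the reproduction example, and this does not avoid the problem - the master still contacts whatever P4PORT is stored in the chosen credential:

          // Azure slave: credential whose P4PORT is 127.0.0.1:1999 (the proxy on the slave itself).
          node('azure-dublin') {
              p4sync charset: 'none', credential: 'p4-azure-localhost-proxy', populate: autoClean(delete: true, modtime: false, parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'], pin: '', quiet: true, replace: true, tidy: false), source: depotSource('//depot/SkipDefault/...')
          }

          // On-premise slaves: credential whose P4PORT points at the shared Freiburg proxy.
          node('freiburg') {
              p4sync charset: 'none', credential: 'p4-freiburg-proxy', populate: autoClean(delete: true, modtime: false, parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'], pin: '', quiet: true, replace: true, tidy: false), source: depotSource('//depot/SkipDefault/...')
          }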


          Btw: I would have run into this problem even if I had installed a local Perforce proxy on my on-premise CI slaves, which I am currently still considering.


          Any news here?


          Karl Wirth added a comment -

          Hello heiko_nardmann. I can ping the developers to get them to take a look, but this is a weird one because no one else seems to have tried it this way.

          One thought: maybe the hosts file/local DNS could be used to point each location to a local perforce:1666. If you have auth.id set, that may be valid. Have you tried that (have perforce resolve to the real Perforce server on the master and to localhost on the slaves)?
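
          A minimal sketch of that idea, assuming the credential's P4PORT is changed to perforce:1666 and auth.id is configured on the P4D server; the hostnames and addresses below are placeholders:

          # /etc/hosts on the Jenkins master - "perforce" resolves to the real P4D server.
          10.0.0.5    perforce

          # /etc/hosts on each slave - "perforce" resolves to the proxy on the slave itself.
          127.0.0.1   perforce

          With a single credential of perforce:1666, both the master's initial connection and the slave's sync would then reach a host that actually answers on that name.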


            Assignee: Unassigned
            Reporter: Heiko Nardmann (heiko_nardmann)