[JENKINS-55248] workspace mixup when 2 jobs running on 2 diff nodes at the same time

Type: Bug
Priority: Minor
Resolution: Fixed
We have 2 pipelines that perform a simple checkout on 2 different branches, scheduled to run at the same time on 2 different nodes every day, say Job-1 and Job-2.
Sometimes we get an error like this, for example for Job-1:
P4 Task: attempt: 1
P4 Task: failed: com.perforce.p4java.exception.P4JavaException: com.perforce.p4java.exception.P4JavaException: hudson.AbortException: P4JAVA: Error(s):
Path 'C:\Jenkins\workspace\Job-2/...' is not under client's root 'C:\Jenkins\workspace\Job-2'.
Looks like some sort of race condition is going on; it doesn't happen every time. Any ideas?
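For illustration, the kind of setup described here might look roughly like the sketch below; the node labels, cron schedule, and the use of 'checkout scm' are assumptions, not the actual job configuration:
{{// Job-1; Job-2 is identical apart from its branch and the node it runs on
pipeline {
    agent { label 'node-1' }          // Job-2 would run on another node, e.g. 'node-2'
    triggers { cron('0 2 * * *') }    // both jobs are scheduled at the same time every day
    stages {
        stage('Checkout') {
            steps {
                // simple p4 checkout of this job's branch into C:\Jenkins\workspace\Job-1
                checkout scm
            }
        }
    }
}}}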
Hi,
Any update on this one?
I get a similar issue on my setup.
I've several independent pipeline jobs.
Each job, when triggered, does a p4 checkout.
When multiple jobs are triggered at the same time, the P4CLIENT seems to be inconsistent between jobs, and some of the jobs fail because they are using the client from a different job. It seems that the P4CLIENT variable is being shared between the different checkout processes.
The jobs that get the wrong P4CLIENT fail with the error reported previously:
ERROR: P4: Task Exception: hudson.AbortException: P4JAVA: Error(s):
Path '/correct/path/for/this/job/...' is not under client's root '/path/from/parallel/job'.
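A quick way to see which client each build actually received is to log it right after the checkout, for example with a stage like the sketch below (this assumes the P4_CLIENT variable that the p4 plugin normally exports to the build environment):
{{stage('Verify client') {
    steps {
        // P4_CLIENT is set by the p4 plugin after a checkout; logging it per build
        // makes a cross-job client mixup visible in the console output
        echo "P4 client for ${env.JOB_NAME} on executor ${env.EXECUTOR_NUMBER}: ${env.P4_CLIENT}"
    }
}}}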
Each job is using the checkout in the following manner:
{{def p4_ws_name = 'jenkins_${env.JOB_NAME}'
def workspace_view = """\
    //my_depot/... //${p4_ws_name}/...
    """.stripIndent()

pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Checkout') {
            steps {
                // checkout perforce(...) using the workspace defined above (details elided)
            }
        }
        stage('Something after') {
            steps {
                // ...
            }
        }
    }
}}}
Some help would be appreciated.
Thanks,
Frederico
PS: I've tried with manually triggered builds, timer-triggered builds, and SCM-triggered builds; all produce the same issue if triggered at the same time.
Hi samica,
We haven't finished the investigation yet, but we think it may be due to hard-coded workspace names.
In the above example you have 'p4_ws_name'. Is this a hard-coded name in the real script?
This needs to be unique for every node it runs on. Therefore we recommend you use a naming convention such as:
jenkins-${NODE_NAME}${JOB_NAME}${EXECUTOR_NUMBER}
Are you able to try this and see if it resolves the problem? If not, let me know and we can start a support case.
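Applied inside a stage, that convention might look something like the sketch below; the credential ID and depot path are placeholders, and the populate/clientSpec options are just one reasonable choice rather than a required configuration:
{{pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                script {
                    // unique per node / job / executor, as recommended above; Groovy
                    // interpolation puts the same value into the client name and the view
                    def ws_name = "jenkins-${env.NODE_NAME}-${env.JOB_NAME}-${env.EXECUTOR_NUMBER}"
                    checkout perforce(
                        credential: 'my-p4-credential',   // placeholder credential ID
                        populate: autoClean(
                            delete: true, modtime: false,
                            parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'],
                            pin: '', quiet: true, replace: true, tidy: false
                        ),
                        workspace: manualSpec(
                            charset: 'none',
                            cleanup: false,
                            name: ws_name,
                            pinHost: false,
                            spec: clientSpec(
                                allwrite: false, backup: true, changeView: '', clobber: true,
                                compress: false, line: 'LOCAL', locked: false, modtime: false,
                                rmdir: false, serverID: '', streamName: '', type: 'WRITABLE',
                                view: "//my_depot/... //${ws_name}/..."   // placeholder depot path
                            )
                        )
                    )
                }
            }
        }
    }
}}}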
Hi p4karl,
In my case the workspace name is defined by "def p4_ws_name = 'jenkins_${env.JOB_NAME}'".
The name is only based on the job name because each job always uses the same workspace, with only incremental updates.
Also, I don't allow different iterations of the same job to run concurrently; only different jobs may run at the same time.
It sounds weird to me to have different workspace names for each time a given job runs, which will also create a load of redundant workspaces in p4 (it would create #jobs * #executors * #nodes workspaces when I only require #jobs).
I will give it a try anyway for the sake of investigation.
Further information: I've actually tried running all jobs on just the master node (single node) with multiple executors and I see the same issue.
Hi samica. Unfortunately the extra workspaces are necessary to ensure data consistency. Perforce workspaces rely on the data being in the same location on the same node. For example, if due to multiple executors the job runs in a different Jenkins workspace location, you will get inconsistent results unless the Perforce workspace is unique to that location. The same goes for nodes: in general you should never use a Perforce workspace on two different nodes, because of the have list that records the files synced to that location.
However, if my suggestion above doesn't fix it, we can take this offline so I can request more confidential information about your system.
Hi p4karl,
Thanks for your quick feedback. Unfortunately, still no luck.
I should clarify that:
- I'm only using the master node (single node)
- a given job always uses the same folder/location (unless Jenkins is moving folders around in the background)
- each p4 workspace is unique to a job (i.e., jobA always does taskA, which uses workspace A; jobB always does taskB, which uses workspace B); the differences are just incremental syncs with p4 (i.e., updating the workspace with new changelists)
- I only allow one instance of each job to run at a given time, to avoid clashes
- a job can run on any executor
I've modified my script as follows:
{{def p4_ws_name
def workspace_view

pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    // need to initialize the global variables here because some
                    // env vars are only available inside the agent
                    p4_ws_name = 'jenkins_${JOB_NAME}_${NODE_NAME}_${EXECUTOR_NUMBER}'
                    workspace_view = """\
                        //my_depot/... //${p4_ws_name}/...
                        """.stripIndent()
                }
                checkout perforce(
                    credential: 'xxxx',
                    populate: autoClean(
                        delete: true,
                        modtime: false,
                        parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'],
                        pin: '',
                        quiet: true,
                        replace: true,
                        tidy: false
                    ),
                    workspace: manualSpec(
                        charset: 'none',
                        cleanup: false,
                        name: p4_ws_name,
                        pinHost: false,
                        spec: clientSpec(
                            allwrite: false,
                            backup: true,
                            changeView: '',
                            clobber: true,
                            compress: false,
                            line: 'LOCAL',
                            locked: false,
                            modtime: false,
                            rmdir: false,
                            serverID: '',
                            streamName: '',
                            type: 'WRITABLE',
                            view: workspace_view
                        )
                    )
                )
            }
        }
        stage('Something after') {
            steps {
                echo "something something"
            }
        }
    }
}}}
Hi samica. I have just sent you a private email so I can request more information. Please let me know if you don't receive it.
Solved by using a unique name per job in the workspace name used to get the Jenkinsfile.
Hi mei_liu. I would like to see some more details about your setup, so I have sent you a request for more information via email.