Status: Closed
When scanning for changes in multibranch pipelines, the plugin runs p4 changes -m1 //Path/...@now. This triggers resource-heavy behavior similar to:
For example with a big dataset in the multibranch scan log we see:
p4 changes -m1 //Path/branch/...@now
Request too large (over 500000); see 'p4 help maxresults'.
ERROR: [Mon Mar 30 13:07:05 CEST 2020] Could not fetch branches from source d5fe37f7-6c40-4111-af23-d4f539fec120
com.perforce.p4java.exception.RequestException: Request too large (over 500000); see 'p4 help maxresults'.
When checking this at the command line, we see that limiting the query range or dropping '@now' works:
$ p4 changes -m1 //path/branch/...@now
Request too large (over 500000); see 'p4 help maxresults'.
$ p4 changes -m1 //path/branch/...@2020/03/30
Request too large (over 500000); see 'p4 help maxresults'.
$ p4 changes -m1 //path/branch/...
Change 62552 on 2020/03/30 by user@PC1 'CL Description'
$ p4 changes -m1 //path/branch/...@52552,62552
Change 62552 on 2020/03/30 by user@PC1 'CL Description'
It would be great if we could either drop the usage of '@now' or restrict the query to the last n changes.
- is related to
JENKINS-57870 Request too large for server memory since 10.1.0
This change has caused a problem for us. We have a multibranch pipeline set up for a project, and as soon as a stream's last revision is 1000 changes older than the server's latest (this corresponds to DEFAULT_HEAD_LIMIT in PerforceSCM.java), the multibranch scan finds no changes for that stream, falls back to the server-wide "latest" revision, and triggers a build. You can see this in a snippet from our multibranch "Scan Multibranch Pipeline Log":
... p4 streams //Project/...
+ ... p4 login -s
+ ... p4 client -o jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6
+ ... p4 client -i
+ ... p4 client -o jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6
+ ... p4 info
+ ... p4 info
+ ... p4 client -o jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6
+ ... p4 client -i
+ ... View:
+ ... p4 counter change
+ ... p4 counter change
+ ... p4 changes -m1 -ssubmitted //jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6/...@1___
+ P4: no revisions under //jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6/...@149283,150283
using change: 150283
Scanning for //Project/some_stream/Jenkinsfile
... p4 files -e //jenkinsTemp-5189b562-c22b-4b8c-a28d-bd862e2c51d6/Jenkinsfile
+ ‘Jenkinsfile’ found
Changes detected: some_stream (149129 → 150283)
What this means in practice is that many streams with no new changes are triggered for every single change submitted to our Perforce server, clogging up our build pipeline with builds that should never have been triggered in the first place. We have had to disable our multibranch pipeline to prevent this.
It seems to me that the plugin should default to not triggering a build when there are no revisions in the range, instead of triggering one every time.
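The suggested behavior could look like the sketch below: when the ranged changes query comes back empty, keep the stream's last-known revision rather than falling back to the server-wide head (which is what triggers the spurious build). This is a hypothetical illustration, not the actual plugin code; resolveRevision and queryLatestInRange are assumed names.

```java
import java.util.Optional;

// Sketch: prefer the stream's last-known revision when the ranged query
// finds nothing, instead of falling back to the server-wide head change.
public class ScanGuard {
    // latestInRange stands in for the result of the plugin's
    // "p4 changes -m1 ...@low,high" lookup; empty means no submitted
    // changes in the window, so no build should be triggered.
    static long resolveRevision(Optional<Long> latestInRange, long lastKnown) {
        return latestInRange.orElse(lastKnown);
    }

    public static void main(String[] args) {
        // No changes in the scanned range: stay on the previous revision.
        System.out.println(resolveRevision(Optional.empty(), 149129L));
        // A real change in range: advance to it.
        System.out.println(resolveRevision(Optional.of(150283L), 149129L));
    }
}
```

With a guard like this, the "no revisions under ...@149283,150283" case from the log above would resolve to 149129 and the scan would report no change for the stream.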
Created a bug to track the issue: https://issues.jenkins.io/browse/JENKINS-64193
Hi stuartrowe - Does increasing 'Head change query limit' work around the behavior?
Jenkins > Manage Jenkins > Configure System > Head change query limit
I doubt p4d is scanning the wrong way (the -m1 flag should impose minimal load). The build user should not have MaxResults set; MaxLockTime would perhaps be more appropriate, or run the scans off a replica to minimise load.
Notes: Possibly apply ConnectionHelper::getHeadLimit() to AbstractP4ScmSource::findLatestChange(...)
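The note above might translate into something like the following sketch: use the configured head limit to bound the revision range passed to the changes query, anchored at the current change counter, so the query never degenerates to an open-ended '@now'. Method and class names here are assumptions for illustration, not the plugin's actual API.

```java
// Sketch: apply a configurable head limit (cf. ConnectionHelper::getHeadLimit)
// when building the query used by findLatestChange, so the server only
// scans the last "headLimit" changes instead of the whole history.
public class FindLatest {
    // headCounter would come from "p4 counter change"; headLimit from the
    // plugin's 'Head change query limit' setting.
    static String latestChangeQuery(String path, long headCounter, int headLimit) {
        if (headLimit <= 0) {
            // No limit configured: query up to the current counter only.
            return path + "@" + headCounter;
        }
        long low = Math.max(1, headCounter - headLimit);
        // e.g. //Project/stream/...@149283,150283
        return path + "@" + low + "," + headCounter;
    }

    public static void main(String[] args) {
        System.out.println(latestChangeQuery("//Project/stream/...", 150283L, 1000));
    }
}
```

Note that with headCounter 150283 and a limit of 1000 this yields the same @149283,150283 range seen in the scan log above, which is why the empty-range fallback behavior (JENKINS-64193) matters whenever a stream is quieter than the rest of the server.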