When scanning for changes in multibranch pipelines, the plugin runs p4 changes -m1 //Path/...@now. On large depots this triggers the same server resource limit as the related issue JENKINS-57870.
For example, with a big dataset we see the following in the multibranch scan log:
p4 changes -m1 //Path/branch/...@now
Request too large (over 500000); see 'p4 help maxresults'.
ERROR: [Mon Mar 30 13:07:05 CEST 2020] Could not fetch branches from source d5fe37f7-6c40-4111-af23-d4f539fec120
com.perforce.p4java.exception.RequestException: Request too large (over 500000); see 'p4 help maxresults'.
When checking this at the command line, we see that limiting the query range or dropping '@now' works:
$ p4 changes -m1 //path/branch/...@now
Request too large (over 500000); see 'p4 help maxresults'.

$ p4 changes -m1 //path/branch/...@2020/03/30
Request too large (over 500000); see 'p4 help maxresults'.

$ p4 changes -m1 //path/branch/...
Change 62552 on 2020/03/30 by user@PC1 'CL Description'

$ p4 changes -m1 //path/branch/...@52552,62552
Change 62552 on 2020/03/30 by user@PC1 'CL Description'
It would be great if we could either drop the usage of '@now' or restrict the query to the last n changes.
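As an illustrative workaround only (not the plugin's current behaviour; the lower change number 62452 is hypothetical, chosen as "head minus 100"), the head change can be read from the standard 'change' counter and the query bounded to a recent change range instead of '@now':

$ p4 counter change
62552
$ p4 changes -m1 //path/branch/...@62452,62552
Change 62552 on 2020/03/30 by user@PC1 'CL Description'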
Is related to: JENKINS-57870 Request too large for server memory since 10.1.0 (Closed)
[JENKINS-61745] Scan multibranch with date range or drop '@now'
Link | New: This issue is related to JENKINS-57870 |
Environment | Original: p4-plugin 1.10.9 | New: p4-plugin 10.1.9 |
Labels | New: P4_VERIFY |
Status | Original: Open [ 1 ] | New: In Progress [ 3 ] |
Assignee | New: Paul Allen [ p4paul ] |
Resolution | New: Fixed [ 1 ] |
Status | Original: In Progress [ 3 ] | New: Closed [ 6 ] |
I doubt p4d is scanning in the wrong way (the -m1 flag should keep the load minimal). The build user should not have MaxResults set; MaxLockTime would perhaps be more appropriate, or the scan could run off a replica to minimise load.
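A minimal sketch of that server-side tuning, assuming the build user belongs to a hypothetical group named 'jenkins' (the group name and time value are illustrative): leave MaxResults unset for that group and cap lock time instead.

$ p4 group -o jenkins
...
MaxResults:   unset
MaxScanRows:  unset
MaxLockTime:  30000
...

MaxLockTime is expressed in milliseconds, so 30000 bounds long-held table locks at 30 seconds without rejecting large result sets the way MaxResults does.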
Notes: Possibly apply ConnectionHelper::getHeadLimit() to AbstractP4ScmSource::findLatestChange(...)
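A rough sketch of that idea follows. Only the names ConnectionHelper::getHeadLimit() and AbstractP4ScmSource::findLatestChange(...) come from the note above; everything else is illustrative and not the plugin's actual code. The point is simply that the scan could bound the 'p4 changes -m1' query to the last N changes below a known head change instead of using '@now'.

// Illustrative only: builds a bounded revision range for 'p4 changes -m1'
// instead of appending '@now'. Not the p4-plugin's actual implementation.
public class HeadLimitSketch {

    /** Hypothetical stand-in for ConnectionHelper::getHeadLimit(). */
    static int getHeadLimit() {
        return 100; // e.g. a configurable "scan the last N changes" limit
    }

    /** Build the path argument used when looking up the latest change. */
    static String boundedChangesPath(String path, long headChange) {
        long from = Math.max(1, headChange - getHeadLimit());
        // e.g. //path/branch/...@62452,62552 rather than //path/branch/...@now
        return path + "@" + from + "," + headChange;
    }

    public static void main(String[] args) {
        System.out.println(boundedChangesPath("//path/branch/...", 62552L));
    }
}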