Jenkins handles a monorepo well enough when using the Git SCM with an SSH clone URL. Suppose we have the following repo structure:
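A layout of this shape, with one Jenkinsfile per project (project names here are hypothetical), would be, for example:

```
monorepo/
├── project-a/
│   ├── Jenkinsfile
│   └── src/
├── project-b/
│   ├── Jenkinsfile
│   └── src/
└── ...
```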
We create one multibranch pipeline per Jenkinsfile so that each project gets its own independent pipeline (CI + CD). The projects all share common code through Jenkins shared libraries, and everything works fine.
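As a sketch of how one of these jobs could be defined with the Job DSL plugin's `multibranchPipelineJob` (the repo URL, job name, and script path are hypothetical, not taken from our actual setup):

```groovy
// One multibranch pipeline per project, pointed at that
// project's Jenkinsfile inside the monorepo.
multibranchPipelineJob('project-a') {
    branchSources {
        git {
            id('project-a')
            remote('git@github.com:example/monorepo.git')
            credentialsId('monorepo-ssh-key')
        }
    }
    factory {
        workflowBranchProjectFactory {
            // Only this Jenkinsfile triggers this pipeline.
            scriptPath('project-a/Jenkinsfile')
        }
    }
}
```

With the plain Git branch source, branch indexing clones over SSH and does not count against the GitHub API quota, which is why this setup works.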
However, if we switch to the GitHub branch source and enable webhooks from GitHub, we immediately run into rate-limit issues. We have a large repository with thousands of pushes a day. Every time a webhook fires from GitHub and hits Jenkins, all N multibranch pipelines begin scanning ALL remote refs of the repository, and our repository has tens of thousands of tags.
A single scan by a single multibranch pipeline exhausts our entire GitHub rate-limit budget.
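A back-of-the-envelope estimate shows why. GitHub's authenticated REST API allows 5,000 requests per hour, and listing refs is paginated at up to 100 items per page; the tag count and pipeline count below are illustrative assumptions, not measured numbers from our setup:

```python
# All counts are assumptions for illustration, not measurements.
TAGS = 30_000        # "tens of thousands" of tags
PER_PAGE = 100       # GitHub REST API max page size
PIPELINES = 20       # assumed number of multibranch pipelines
RATE_LIMIT = 5_000   # authenticated REST requests per hour

# Ceiling division: pages of tag refs fetched per scan.
requests_per_scan = -(-TAGS // PER_PAGE)
total_requests = requests_per_scan * PIPELINES

print(requests_per_scan)                 # 300 requests per pipeline scan
print(total_requests)                    # 6000 for one webhook fan-out
print(total_requests > RATE_LIMIT)       # True: one event blows the budget
```

So a single webhook that fans out to every pipeline can consume more than an hour's quota, before any branches or pull requests are even listed.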
I am not entirely sure why Jenkins must rescan the entire repository. The webhook from GitHub already contains all the information about the ref to be built, so at the very least the individual jobs could take the webhook at face value and build/delete/etc. without requiring a full repository scan.
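For reference, a GitHub `push` event payload already names the exact ref and commits involved; trimmed to the relevant fields (repository name and SHAs invented for illustration), it looks roughly like:

```json
{
  "ref": "refs/heads/feature-x",
  "before": "6113728f27ae82c7b1a177c8d03f9e96e0adf246",
  "after": "59b20b8d5c6ff8d09518454d4dd8b7a2fbe18af2",
  "created": false,
  "deleted": false,
  "repository": {
    "full_name": "example/monorepo"
  }
}
```

In principle this is enough to update exactly one branch job, rather than re-enumerating every branch and tag in the repository.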
Is this possible? Why does Jenkins currently scan the entire repo?