[JENKINS-49440] Huge disk read load because of the progress log

Type: Bug
Resolution: Unresolved
Priority: Major
Component: None

Environment:
zap plugin 1.1.0
zaproxy 2.7.0 (with HSQL DB)
jenkins 2.89.3
ubuntu 14.04
The ZAP plugin displays progress information during execution of the analysis, with messages like these every 5 seconds:

{code:java}
[ZAP Jenkins Plugin] ACTIVE SCAN STATUS [ 88% ]
[ZAP Jenkins Plugin] ALERTS COUNT [ 38 ]
[ZAP Jenkins Plugin] MESSAGES COUNT [ 25749 ]
{code}
The "MESSAGES COUNT" comes from a call to the "/core/view/numberOfMessages" zaproxy API:
This API seems to be implemented by iterating over all the messages currently present in the database, using a "CounterProcessor":

https://github.com/zaproxy/zaproxy/blob/2.7.0/src/org/zaproxy/zap/extension/api/CoreAPI.java#L1005
https://github.com/zaproxy/zaproxy/blob/2.7.0/src/org/zaproxy/zap/extension/api/CoreAPI.java#L1532
https://github.com/zaproxy/zaproxy/blob/2.7.0/src/org/zaproxy/zap/extension/api/CoreAPI.java#L1575
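For illustration, here is a minimal sketch of that counting pattern (hypothetical names, not ZAP's actual classes): every call walks the full record set, so a single "count" costs as much as reading everything stored so far, while the database could answer the same question with one SELECT COUNT(*).

{code:java}
import java.util.List;

// Hypothetical sketch of the pattern described above -- NOT ZAP's actual code.
// A "processor" is invoked once per stored record, so counting N messages
// means fetching all N of them (and their disk pages) on every single call.
interface RecordProcessor {
    void process(long recordId);
}

final class CounterProcessor implements RecordProcessor {
    private int count = 0;

    @Override
    public void process(long recordId) {
        count++; // the record had to be read just to bump a counter
    }

    int getCount() {
        return count;
    }
}

class NaiveMessageCount {
    // O(N) disk reads per call; polled every 5 seconds, total reads grow
    // roughly quadratically over the lifetime of a scan.
    static int countMessages(List<Long> allRecordIds) {
        CounterProcessor counter = new CounterProcessor();
        for (long id : allRecordIds) {
            counter.process(id);
        }
        return counter.getCount();
    }
    // A constant-cost alternative would let HSQLDB do the counting, e.g.:
    //   SELECT COUNT(*) FROM <history table> WHERE <session filter>
}
{code}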
When the HSQL DB gets big enough (when the relevant data no longer fit in the memory cache, I assume), this naive implementation becomes a huge performance issue: the plugin polls every 5 seconds and each poll re-reads the whole message table, so total disk reads grow roughly quadratically with the scan size. I've monitored the zaproxy process with "iotop", and on my test case it shows ~8GB of disk reads during the job execution (versus ~750MB of disk writes, and a final database size of ~1.2GB on disk).
Here are the same example log lines as above, with their actual timestamps (from the timestamper plugin):

{code:java}
00:22:10.300
00:22:10.302 [ZAP Jenkins Plugin] ACTIVE SCAN STATUS [ 88% ]
00:22:10.433 [ZAP Jenkins Plugin] ALERTS COUNT [ 38 ]
00:23:00.887 [ZAP Jenkins Plugin] MESSAGES COUNT [ 25749 ]
00:23:00.888
{code}
See the 50-second gap between the ALERTS COUNT and MESSAGES COUNT lines?
I have rebuilt the plugin without the "MESSAGES COUNT" messages and without the "numberOfMessages" API calls (a sketch of the pared-down polling loop follows the list below), and:
- my test case job executes faster (average of ~20 minutes instead of ~28 minutes)
- the disk read load has dropped a lot (~1GB instead of ~8GB; the remaining reads appear at the very end of the job execution, during zaproxy shutdown, when it deletes its temporary data from the DB, but that's a different story)
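For reference, a minimal sketch of that pared-down polling loop, using ZAP's documented JSON API views over plain HTTP. The host, port, scan id, and the absence of an API key are assumptions for illustration:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ZapProgressPoller {
    // Assumed ZAP address; adjust to your setup (and append &apikey=... if one is set).
    private static final String ZAP = "http://localhost:8080";

    private static String get(String path) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(ZAP + path).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine(); // these views answer with single-line JSON
        }
    }

    public static void main(String[] args) throws Exception {
        while (true) {
            // Cheap: the active scanner tracks its own progress counter.
            System.out.println("STATUS " + get("/JSON/ascan/view/status/?scanId=0"));
            // Cheap: answered in ~130ms in the timestamped log above.
            System.out.println("ALERTS " + get("/JSON/core/view/numberOfAlerts/"));
            // Deliberately NOT calling /JSON/core/view/numberOfMessages/:
            // that view walks every stored message on each poll.
            Thread.sleep(5000);
        }
    }
}
{code}

The status and alert-count views returned within milliseconds in the timestamps above, so keeping them costs little; only the message count triggered the full table walk.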
A few words of context: I'm managing many (400+) Jenkins instances on a private cloud, with ~1000 VMs. Last month, a single Jenkins slave attached to a project that started using the ZAP plugin was responsible for ~25% of the disk I/O of the whole platform. That's why I've looked into this a bit closer...
I will make a PR to remove these "MESSAGES COUNT" logs from the plugin. Although this issue is ultimately caused by a zaproxy bug, I really think your plugin should avoid hitting it so hard (also, because your plugin doesn't require a specific zaproxy version, any hypothetical fix on the zaproxy side could take quite some time to take effect).