This directory contains tools and utilities that help us measure the performance overhead introduced by the agent and track how that overhead changes over time.
The overhead tests here should be considered a "macro" benchmark. They serve to measure high-level overhead as perceived by the operator of a "typical" application. Tests are performed on a Java 11 distribution from Eclipse Temurin.
There is one dynamic test here called `OverheadTests`. Its `@TestFactory` method creates a test pass for each of the defined configurations.
Before the tests run, a single collector instance is started. Each test pass has one or more agents configured and those are tested in series.
For each agent defined in a configuration, the test runner (using testcontainers) performs a series of setup, load, and measurement steps, including running `jcmd` inside the petclinic container to capture JVM metrics; a sketch of this flow appears below. This sequence repeats for every agent configured in each test configuration.
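To make the flow concrete, here is a minimal testcontainers sketch of one such pass. The image name, health endpoint, agent path, and `jcmd` arguments are illustrative assumptions, not the actual test code:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class AgentPassSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical petclinic image; the real tests build their own image
    // and attach the agent under test via JVM arguments.
    try (GenericContainer<?> petclinic =
        new GenericContainer<>("spring-petclinic:example")
            .withEnv("JAVA_TOOL_OPTIONS", "-javaagent:/app/agent.jar")
            .withExposedPorts(8080)
            // Startup time is measured as time-to-"healthy".
            .waitingFor(Wait.forHttp("/actuator/health").forStatusCode(200))) {
      petclinic.start();

      // ... drive load against the app (the real tests run a k6 script) ...

      // Capture JVM metrics by running jcmd inside the petclinic container.
      var result = petclinic.execInContainer("jcmd", "1", "GC.heap_info");
      System.out.println(result.getStdout());
    }
  }
}
```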
After all the tests are complete, the results are collected and committed back to the `/results` subdirectory as CSV and summary text files.
For each test pass, we record the following metrics in order to compare agents and determine relative overhead.
metric name | units | description
---|---|---
Startup time | ms | How long it takes for the spring app to report "healthy"
Total allocated mem | bytes | Total memory allocated across the life of the application
Heap (min) | bytes | Smallest observed heap size
Heap (max) | bytes | Largest observed heap size
Thread switch rate | #/s | Max observed thread context switch rate
GC time | ms | Total amount of time spent paused for garbage collection
Request mean | ms | Average time to handle a single web request (measured at the caller)
Request p95 | ms | 95th percentile time to handle a single web request (measured at the caller)
Iteration mean | ms | Average time for a single pass through the k6 test script
Iteration p95 | ms | 95th percentile time for a single pass through the k6 test script
Peak threads | # | Highest number of running threads in the VM, including agent threads
Network read mean | bits/s | Average network read rate
Network write mean | bits/s | Average network write rate
Average JVM user CPU | fraction | Average observed user CPU usage (range 0.0-1.0)
Max JVM user CPU | fraction | Max observed user CPU usage (range 0.0-1.0)
Average machine tot. CPU | fraction | Average fraction of total machine CPU used (range 0.0-1.0)
Total GC pause nanos | ns | Total JVM time spent paused due to GC
Run duration ms | ms | Duration of the test run
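These metrics end up in the results CSV mentioned above. As a rough sketch (the class, column subset, and file layout here are hypothetical, not the actual schema), appending one row per test pass might look like:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class ResultsCsvSketch {
  // Illustrative subset of the columns from the metrics table above.
  static final String HEADER =
      "agent,startup_ms,total_allocated_bytes,request_mean_ms,request_p95_ms";

  static void appendRow(Path csv, String agent, long startupMs,
      long totalAllocatedBytes, double requestMeanMs, double requestP95Ms)
      throws IOException {
    // Write the header once, then append one row per test pass.
    if (Files.notExists(csv)) {
      Files.writeString(csv, HEADER + System.lineSeparator(), StandardOpenOption.CREATE);
    }
    String row = String.join(",", agent, Long.toString(startupMs),
        Long.toString(totalAllocatedBytes), Double.toString(requestMeanMs),
        Double.toString(requestP95Ms));
    Files.writeString(csv, row + System.lineSeparator(), StandardOpenOption.APPEND);
  }
}
```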
Each config defines the set of agents to test and the parameters of a test pass. A small number of configurations are currently tested; additional configurations can be created by submitting a PR against the `Configs` class.
An agent is defined in code as a name, description, optional URL, and optional additional arguments to be passed to the JVM (not including `-javaagent:`). New agents may be defined by creating new instances of the `Agent` class. The `AgentResolver` is used to download the relevant agent jar for an `Agent` definition.
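As a hedged illustration, defining a new agent might look something like the following. The constructor shape is an assumption based on the description above (name, description, optional URL, extra JVM args), not the actual `Agent` API:

```java
import java.util.List;

class AgentExample {
  // Hypothetical construction; consult the Agent class for the real
  // factory methods and parameter order.
  static final Agent MY_AGENT = new Agent(
      "my-agent",                                  // name
      "My vendor's distro, latest release",        // description
      "https://example.com/releases/my-agent.jar", // optional download URL
      List.of("-Xmx2g"));                          // extra JVM args (excluding -javaagent:)
}
```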
The tests are run nightly via GitHub Actions. The results are collected and appended to a CSV file, which is committed back to the repo in the `/results` subdirectory.
The tests require Docker to be running. Simply run `OverheadTests` in your IDE.
Alternatively, you can run the tests from the command line with Gradle:

```
cd benchmark-overhead
./gradlew test
```
There are no result visualizations yet; help wanted! Our goal is to have the results and a rich UI running in the `gh-pages` branch, similar to earlier tools. Please help us make this happen.