# Writing Benchmarks

If you're familiar with the Java Microbenchmark Harness (JMH) toolkit, you'll find that the `kotlinx-benchmark`
library takes a similar approach to writing benchmarks. This compatibility allows you to run your
JMH benchmarks written in Kotlin on various platforms with minimal, if any, modifications.

Like JMH, kotlinx-benchmark is annotation-based, meaning you configure benchmark execution behavior using annotations.
The library extracts the metadata provided through annotations to generate code that benchmarks the specified code
in the desired manner.

To get started, let's examine a simple example of a multiplatform benchmark:

```kotlin
import kotlinx.benchmark.*

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
@Warmup(iterations = 10, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
@Measurement(iterations = 20, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@State(Scope.Benchmark)
class ExampleBenchmark {

    // Parameterizes the benchmark to run with different list sizes
    @Param("4", "10")
    var size: Int = 0

    private val list = ArrayList<Int>()

    // Prepares the test environment before each benchmark run
    @Setup
    fun prepare() {
        for (i in 0..<size) {
            list.add(i)
        }
    }

    // Cleans up resources after each benchmark run
    @TearDown
    fun cleanup() {
        list.clear()
    }

    // The actual benchmark method
    @Benchmark
    fun benchmarkMethod(): Int {
        return list.sum()
    }
}
```

**Example Description**:
This example tests the speed of summing numbers in an `ArrayList`. We evaluate this operation with lists
of 4 and 10 numbers to understand how the method performs with different list sizes.

## Explaining the Annotations

The following annotations are available to define and fine-tune your benchmarks.

### @State

The `@State` annotation specifies how widely a state object is shared among worker threads.
Benchmark classes must be marked with this annotation to define their scope of state sharing.

Currently, multi-threaded execution of a benchmark method is supported only on the JVM, where you can specify various scopes.
Refer to [JMH documentation of Scope](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html)
for details about available scopes and their implications.
In non-JVM targets, only `Scope.Benchmark` is applicable.
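
As an illustration of a JVM-specific scope, here is a minimal sketch of a JVM-only benchmark using `Scope.Thread`,
where each worker thread gets its own copy of the state. The class and method names are hypothetical; JMH
annotations are imported directly, and the class is declared `open` because JMH subclasses state classes during
code generation (the `allopen` compiler plugin can handle this for you instead):

```kotlin
import org.openjdk.jmh.annotations.*

// Each worker thread receives its own instance of this state class,
// so threads never contend on the buffer below.
@State(Scope.Thread)
open class ThreadStateBenchmark {
    private val buffer = StringBuilder()

    @Benchmark
    fun append(): StringBuilder {
        buffer.setLength(0)           // reset the per-thread state
        return buffer.append("hello") // returned value is implicitly consumed
    }
}
```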

When writing JVM-only benchmarks, benchmark classes are not required to be annotated with `@State`.
Refer to [JMH documentation of @State](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/State.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In our snippet, the `ExampleBenchmark` class is annotated with `@State(Scope.Benchmark)`,
indicating the state is shared across all worker threads.

### @Setup

The `@Setup` annotation marks a method that sets up the necessary preconditions for your benchmark test.
It serves as a preparatory step where you initialize the benchmark environment.

The setup method is executed once before the entire set of iterations for a benchmark method begins.
In Kotlin/JVM, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
for details about available levels in Kotlin/JVM.

The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark
results: the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place
for initialization tasks that should not impact the timing results of your benchmark.

Refer to [JMH documentation of @Setup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Setup.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In the provided example, the `@Setup` annotation is used to populate an `ArrayList` with integers from `0` up to a specified `size`.

### @TearDown

The `@TearDown` annotation denotes a method that resets and cleans up the benchmarking environment.
It is chiefly responsible for the cleanup or deallocation of resources and conditions set up in the `@Setup` method.

The teardown method is executed once after the entire set of iterations of a benchmark method completes.
In Kotlin/JVM, you can specify when the teardown method should be executed, e.g., `@TearDown(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
for details about available levels in Kotlin/JVM.

The `@TearDown` annotation is crucial for avoiding performance bias, ensuring the proper maintenance of resources,
and preparing a clean environment for the next run. Similar to the `@Setup` method, the execution time of the
`@TearDown` method is not included in the final benchmark results.

Refer to [JMH documentation of @TearDown](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/TearDown.html)
for more information on the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `cleanup` function annotated with `@TearDown` is used to clear our `ArrayList`.
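
To illustrate the Kotlin/JVM-only `Level` parameter mentioned above, here is a minimal sketch that pairs
per-iteration setup and teardown. The class and method names are hypothetical; JMH annotations are imported
directly, and the class is declared `open` unless the `allopen` compiler plugin handles that:

```kotlin
import org.openjdk.jmh.annotations.*

@State(Scope.Benchmark)
open class PerIterationBenchmark {
    private val list = ArrayList<Int>()

    // Runs before every warmup and measurement iteration,
    // not just once before the whole benchmark.
    @Setup(Level.Iteration)
    fun fill() {
        for (i in 0 until 1_000) list.add(i)
    }

    // Runs after every iteration, restoring a clean state.
    @TearDown(Level.Iteration)
    fun clear() {
        list.clear()
    }

    @Benchmark
    fun sum(): Int = list.sum()
}
```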

### @Benchmark

The `@Benchmark` annotation specifies the methods whose performance you want to measure.
It marks the actual test you're running; the code you want to benchmark goes inside this method.
All other annotations are employed to configure the benchmark's environment and execution.

Benchmark methods may either take no arguments or take a single argument of the [Blackhole](#blackhole) type.
Note that in Kotlin/JVM, benchmark methods must always be `public`.
Refer to [JMH documentation of @Benchmark](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Benchmark.html)
for details about restrictions for benchmark methods in Kotlin/JVM.

In our example, the `benchmarkMethod` function is annotated with `@Benchmark`,
which means the toolkit will measure the performance of summing all the integers in the list.

### @BenchmarkMode

The `@BenchmarkMode` annotation sets the mode of operation for the benchmark.

Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum.
`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit
of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it
takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`.
In Kotlin/JVM, the `Mode` enum has a few more options, including `SingleShotTime`.
Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
for details about available options in Kotlin/JVM.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in the class.
In Kotlin/JVM, it may also be applied to a `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @BenchmarkMode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/BenchmarkMode.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, `@BenchmarkMode(Mode.AverageTime)` is used, indicating that the benchmark aims to measure the
average execution time of the benchmark method.
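
As a Kotlin/JVM-only sketch of per-method modes, the same operation can be reported both ways. The class and
method names are hypothetical; JMH annotations are imported directly, and the class is declared `open` unless
the `allopen` compiler plugin handles that:

```kotlin
import org.openjdk.jmh.annotations.*

@State(Scope.Benchmark)
open class PerMethodModeBenchmark {
    private val list = List(1_000) { it }

    // Reported as operations per unit of time.
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    fun sumThroughput(): Int = list.sum()

    // Reported as the average time taken per operation.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    fun sumAverageTime(): Int = list.sum()
}
```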

### @OutputTimeUnit

The `@OutputTimeUnit` annotation specifies the time unit in which your results are presented.
This time unit can range from minutes to nanoseconds. For a fast-running piece of code,
presenting the result in nanoseconds or microseconds provides a more accurate and detailed measurement.
Conversely, for operations with longer execution times, you might choose to display the output in milliseconds, seconds, or even minutes.
Essentially, the `@OutputTimeUnit` annotation enhances the readability and interpretability of benchmark results.
By default, if the annotation is not specified, results are presented in seconds.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in the class.
In Kotlin/JVM, it may also be applied to a `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @OutputTimeUnit](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/OutputTimeUnit.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@OutputTimeUnit` is set to milliseconds.
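
For instance, a very fast operation can be reported in nanoseconds, as in this minimal multiplatform sketch
(the class and method names are hypothetical, and `BenchmarkTimeUnit.NANOSECONDS` is assumed to be available
alongside the units used elsewhere in this document):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(BenchmarkTimeUnit.NANOSECONDS)
class CountBenchmark {
    private val text = "kotlinx-benchmark"

    // A sub-microsecond operation: nanoseconds keep the result readable.
    @Benchmark
    fun countLetters(): Int = text.count { it.isLetter() }
}
```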

### @Warmup

The `@Warmup` annotation specifies a preliminary phase that runs before the actual benchmarking takes place.
During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included
in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its
optimal performance state so that the results of the measurement iterations are more stable.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in the class.
In Kotlin/JVM, it may also be applied to a `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @Warmup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Warmup.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@Warmup` annotation allows 10 iterations of executing the benchmark method before
the actual measurement starts. Each iteration lasts 500 milliseconds.

### @Measurement

The `@Measurement` annotation controls the properties of the actual benchmarking phase.
It sets how many iterations the benchmark method runs and how long each iteration lasts.
The results from these runs are recorded and reported as the final benchmark results.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in the class.
In Kotlin/JVM, it may also be applied to a `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @Measurement](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Measurement.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@Measurement` annotation specifies that the benchmark method will run 20 iterations,
with each iteration lasting one second, for the final performance measurement.
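
When iterating on a benchmark locally, shorter warmup and measurement phases can give faster (if noisier)
feedback. A hedged sketch with illustrative values rather than recommendations (the class and method names
are hypothetical):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
// Short phases: 3 warmup iterations of 200 ms, then 5 measured iterations of 500 ms.
@Warmup(iterations = 3, time = 200, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
@Measurement(iterations = 5, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
class QuickFeedbackBenchmark {
    private val list = List(10_000) { it }

    @Benchmark
    fun sum(): Int = list.sum()
}
```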

### @Param

The `@Param` annotation is used to pass different parameters to your benchmark method.
It allows you to run the same benchmark method with different input values, so you can see how these variations affect
performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your
benchmark test. The benchmark runs once for each provided value.

The property marked with this annotation must be mutable (`var`) and must not be `private`.
Additionally, only properties of primitive types or the `String` type can be annotated with `@Param`.
Refer to [JMH documentation of @Param](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Param.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `@Param` annotation is used with values `"4"` and `"10"`, meaning the `benchmarkMethod`
will be benchmarked twice: once with the `size` value set to `4` and then with `10`.
This approach helps in understanding how the input list's size affects the time taken to sum its integers.
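
As a sketch of parameterizing over more than one property (the names are hypothetical; following JMH's behavior,
every combination of the parameter values is expected to be benchmarked, four in this case):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class ParamBenchmark {
    // Values are given as strings and converted to the property's type.
    @Param("10", "1000")
    var size: Int = 0

    @Param("-", "_")
    var separator: String = ""

    @Benchmark
    fun join(): String = (0 until size).joinToString(separator)
}
```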

### Other JMH annotations

In Kotlin/JVM, you can use annotations provided by JMH to further tune your benchmarks' execution behavior.
Refer to the [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/package-summary.html)
for the available annotations.

## Blackhole

Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results.
In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code
elimination by the compiler or the runtime virtual machine. A `Blackhole` should be used when a benchmark produces several values.
If the benchmark produces a single value, just return it; it will be implicitly consumed by a `Blackhole`.

### How to Use Blackhole

Inject `Blackhole` into your benchmark method and use it to consume the results of your computations:

```kotlin
@Benchmark
fun iterateBenchmark(bh: Blackhole) {
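    // myList is assumed to be a property of the enclosing @State class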
    for (e in myList) {
        bh.consume(e)
    }
}
```

By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away.

For a deeper dive into `Blackhole` and its nuances on the JVM, you can refer to:
- [Official Javadocs](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/infra/Blackhole.html)
- [Blackhole implementation notes in the JMH sources](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254)