# Performance Benchmark of OpenTelemetry API

This document describes common performance benchmark guidelines on how to
measure and report the performance of OpenTelemetry SDKs.

The goal of this benchmark is to provide a tool for measuring the basic
performance overhead of the OpenTelemetry SDK at a given event throughput on
the target platform.

## Benchmark Configuration

### Span Configuration

- No parent `Span` or `SpanContext`.
- Default Span [Kind](./trace/api.md#spankind) and
  [Status](./trace/api.md#set-status).
- Associated with a [resource](overview.md#resources) carrying the attributes
  `service.name` and `service.version`, each with a 10-character string value,
  and the attribute `service.instance.id` set to a unique UUID. See
  [Service](./resource/semantic_conventions/README.md#service) for details.
- 1 [attribute](./common/common.md#attributes) with a signed 64-bit integer
  value.
- 1 [event](./trace/api.md#add-events) without any attributes.
- The `AlwaysOn` sampler should be enabled.
- Each `Span` is created and immediately ended.

### Measurement Configuration

For languages with a bootstrap cost such as JIT compilation, a warm-up phase
is recommended before the measurement, running under the same
`Span` [configuration](#span-configuration).
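One way to structure the warm-up is to run the identical workload for a fixed
period before timing begins. A minimal stdlib sketch, where
`run_span_iteration` is a hypothetical stand-in for creating and immediately
ending one benchmark-configured span:

```python
import time

def run_span_iteration():
    # Hypothetical stand-in for creating and immediately ending one span
    # under the Span Configuration above; a real benchmark calls the SDK here.
    pass

def benchmark(measure_seconds=15.0, warmup_seconds=5.0):
    # Warm-up: run the identical workload so JIT compilation, lazy
    # initialization, etc. happen before measurement starts.
    deadline = time.monotonic() + warmup_seconds
    while time.monotonic() < deadline:
        run_span_iteration()

    # Measurement: count iterations completed inside the timed window.
    count = 0
    deadline = time.monotonic() + measure_seconds
    while time.monotonic() < deadline:
        run_span_iteration()
        count += 1
    return count / measure_seconds  # spans per second

rate = benchmark(measure_seconds=0.1, warmup_seconds=0.1)
```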

## Throughput Measurement

### Create Spans

Measure the number of spans that can be created and exported via the OTLP
exporter in 1 second per logical core, and the average across all logical
cores. Each span contains 10 attributes, and each attribute consists of two
20-character strings, one as the attribute name and the other as the value.
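The single-core measurement loop can be sketched as follows; `create_span` is
a hypothetical stand-in for the SDK call that creates, populates, and exports
one span via the OTLP exporter:

```python
import os
import random
import string
import time

def rand_str(n=20):
    return "".join(random.choices(string.ascii_lowercase, k=n))

# Each span carries 10 attributes; name and value are 20-character strings.
ATTRIBUTES = {rand_str(): rand_str() for _ in range(10)}

def create_span(attributes):
    # Hypothetical stand-in for creating, populating, and exporting one span
    # via the OTLP exporter; a real benchmark would call the SDK here.
    return dict(attributes)

def spans_per_second(duration=1.0):
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        create_span(ATTRIBUTES)
        count += 1
    return count / duration

per_core = spans_per_second(duration=0.1)
cores = os.cpu_count() or 1
# With one benchmark thread pinned to each of the `cores` logical cores, the
# report would include both the per-core rate and the average across cores.
```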

## Instrumentation Cost

### CPU Usage Measurement

At the span throughput specified by the user, or 10,000 spans per second by
default if the user does not specify one, measure and report the CPU usage of
the SDK with both the default-configured simple and batching span processors,
together with the OTLP exporter. The benchmark should either create an
out-of-process OTLP receiver listening on the exporting target, or adopt an
existing out-of-process OTLP receiver, which responds with a success status
immediately and drops the data. The receiver should not add significant CPU
overhead to the measurement. Because the benchmark does not include user
processing logic, the total CPU consumption of the benchmark program can be
taken as an approximation of the SDK's CPU consumption.

The total running time for one test iteration is suggested to be at least 15
seconds. The average and peak CPU usage should be reported.
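Average CPU usage can be approximated as process CPU time divided by wall-clock
time, and sampling that ratio at intervals also yields a peak. A stdlib sketch
with the span workload stubbed out (`generate_spans_for` is a hypothetical
stand-in for emitting spans at the target rate):

```python
import time

def generate_spans_for(interval):
    # Hypothetical stand-in for emitting spans at the target rate
    # (10,000 spans/second by default) through the processor and exporter.
    deadline = time.monotonic() + interval
    while time.monotonic() < deadline:
        pass  # busy work representing span creation and export

def measure_cpu(total_seconds=15.0, sample_interval=1.0):
    samples = []
    start_cpu, start_wall = time.process_time(), time.monotonic()
    while time.monotonic() - start_wall < total_seconds:
        cpu0, wall0 = time.process_time(), time.monotonic()
        generate_spans_for(sample_interval)
        cpu1, wall1 = time.process_time(), time.monotonic()
        samples.append((cpu1 - cpu0) / (wall1 - wall0))  # per-interval usage
    # Average over the whole run, peak over the sampled intervals.
    average = (time.process_time() - start_cpu) / (time.monotonic() - start_wall)
    return average, max(samples)

avg, peak = measure_cpu(total_seconds=0.2, sample_interval=0.05)
```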

### Memory Usage Measurement

Measure dynamic memory consumption, e.g. heap, for the same scenario as the
CPU usage section above, with a duration of 15 seconds.
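In Python, for example, heap growth and peak heap size over the run could be
captured with the stdlib `tracemalloc` module; `run_span_workload` is a
hypothetical stand-in for running the CPU-usage scenario:

```python
import tracemalloc

def run_span_workload():
    # Hypothetical stand-in for emitting spans through the processor and
    # exporter pipeline for the test duration; allocations model SDK state.
    return [{"attr": i} for i in range(1000)]

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()
result = run_span_workload()          # kept alive until measurement is read
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Report heap growth during the run and the peak heap size observed.
heap_growth = current - baseline
```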

## Report

### Report Format

All of the numbers above should be measured multiple times (at least 10 is
suggested) and reported.
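A simple way to aggregate the repeated measurements is to report summary
statistics over the iterations; `run_iteration` here is a hypothetical
stand-in for one full benchmark run returning a measured number:

```python
import statistics

def run_iteration():
    # Hypothetical stand-in for one full benchmark iteration returning a
    # measured value, e.g. spans per second.
    return 10_000.0

results = [run_iteration() for _ in range(10)]  # at least 10 iterations
report = {
    "min": min(results),
    "max": max(results),
    "mean": statistics.mean(results),
    "stdev": statistics.stdev(results),
}
```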