@@ -40,12 +40,13 @@ The SDK is used to configure what happens with the data collected by the API.
 This typically includes processing it and exporting it out of process for
 analysis, often to an observability platform.

-The API entry point for metrics is the [meter provider][]. It provides meters for
-different scopes, where a scope is just a logical unit of application code. For example,
-instrumentation for an HTTP client library would have a different scope and therefore
-a different meter than instrumentation for a database client library. You use meters
-to obtain instruments. You use instruments to report measurements, which consist
-of a value and set of attributes. This Java code snippet demonstrates the workflow:
+The API entry point for metrics is the [meter provider][]. It provides meters
+for different scopes, where a scope is just a logical unit of application code.
+For example, instrumentation for an HTTP client library would have a different
+scope and therefore a different meter than instrumentation for a database client
+library. You use meters to obtain instruments. You use instruments to report
+measurements, which consist of a value and set of attributes. This Java code
+snippet demonstrates the workflow:

 ```java
 OpenTelemetry openTelemetry = // declare OpenTelemetry instance
@@ -73,7 +74,8 @@ and when the sum of the things is more important than their individual values
 the distribution of measurements is relevant for analysis. For example, a
 histogram is a natural choice for tracking response times for HTTP servers,
 because it's useful to analyze the distribution of response times to evaluate
-SLAs and identify trends. To learn more, see the guidelines for [instrument selection][].
+SLAs and identify trends. To learn more, see the guidelines for [instrument
+selection][].

 I mentioned earlier that the SDK aggregates measurements from instruments. Each
 instrument type has a default aggregation strategy (or simply [aggregation][])
@@ -125,10 +127,10 @@ request, you can determine:
 requests resolve quickly but a small number of requests take a long time and
 bring down the average.

-The second type of OpenTelemetry histogram is the [exponential
-bucket histogram][]. Exponential bucket histograms have buckets and bucket
-counts, but instead of explicitly defining the bucket boundaries, the boundaries
-are computed based on an exponential scale. More specifically, each bucket is
+The second type of OpenTelemetry histogram is the [exponential bucket
+histogram][]. Exponential bucket histograms have buckets and bucket counts, but
+instead of explicitly defining the bucket boundaries, the boundaries are
+computed based on an exponential scale. More specifically, each bucket is
 defined by an index _i_ and has bucket boundaries _(base\*\*i, base\*\*(i+1)]_,
 where _base\*\*i_ means that _base_ is raised to the power of _i_. The base is
 derived from a scale factor that is adjustable to reflect the range of reported
@@ -197,13 +199,14 @@ large range of measurement values.

 Let's bring everything together with a proper demonstration comparing explicit
 bucket histograms to exponential bucket histograms. I've put together some
-[example code][] that simulates tracking response time to an HTTP server in milliseconds.
-It records one million samples to an explicit bucket histogram with the default buckets,
-and to an exponential bucket histogram with a number of buckets that produces roughly
-the same size of [OTLP][]-encoded, Gzip-compressed payload as the explicit bucket
-defaults. Through trial and error, I determined that ~40 exponential buckets produce
-an equivalent payload size to the default explicit bucket histogram with 11 buckets.
-(Your results may vary.)
+[example code][] that simulates tracking response time to an HTTP server in
+milliseconds. It records one million samples to an explicit bucket histogram
+with the default buckets, and to an exponential bucket histogram with a number
+of buckets that produces roughly the same size of [OTLP][]-encoded,
+Gzip-compressed payload as the explicit bucket defaults. Through trial and
+error, I determined that ~40 exponential buckets produce an equivalent payload
+size to the default explicit bucket histogram with 11 buckets. (Your results may
+vary.)

 I wanted the distribution of samples to reflect what we might see in an actual
 HTTP server, with bands of response times corresponding to different operations.
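A note on the exponential bucket math in the passage above: the OpenTelemetry metrics data model derives the base from the scale factor as base = 2\*\*(2\*\*-scale), so larger scales mean narrower buckets. The following is a minimal, hypothetical Java sketch of how a value maps to a bucket index under that formula; it is my own illustration (the class and method names are invented), not the SDK's actual implementation, and it ignores the floating-point edge cases (e.g. values that are exact powers of the base) that a real implementation must handle.

```java
public class ExpBucketSketch {
    // Per the OpenTelemetry spec, base = 2^(2^-scale).
    static double base(int scale) {
        return Math.pow(2.0, Math.pow(2.0, -scale));
    }

    // Bucket i covers (base**i, base**(i+1)], so for a positive value
    // the index is ceil(log_base(value)) - 1.
    static int bucketIndex(double value, int scale) {
        return (int) Math.ceil(Math.log(value) / Math.log(base(scale))) - 1;
    }

    public static void main(String[] args) {
        int scale = 0; // base = 2: buckets (1,2], (2,4], (4,8], ...
        System.out.println(bucketIndex(3.0, scale));   // 1 -> bucket (2,4]
        System.out.println(bucketIndex(100.0, scale)); // 6 -> bucket (64,128]
        // Raising the scale narrows the buckets: at scale 1, base = sqrt(2),
        // and 3.0 lands in bucket 3, which covers (2**1.5, 2**2].
        System.out.println(bucketIndex(3.0, 1));       // 3
    }
}
```

This is what makes the exponential histogram adaptive: changing a single integer (the scale) rescales every boundary at once, instead of requiring a hand-picked boundary list.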