
Commit e4f6838

danelson, tiffany76, codeboten, opentelemetrybot, and cartermp authored
Collector internal telemetry updates (#4867)
Co-authored-by: Tiffany Hrabusa <[email protected]>
Co-authored-by: Alex Boten <[email protected]>
Co-authored-by: opentelemetrybot <[email protected]>
Co-authored-by: Phillip Carter <[email protected]>
1 parent 1fe2d78 commit e4f6838

1 file changed: +13 -9 lines changed

Diff for: content/en/docs/collector/internal-telemetry.md

```diff
@@ -283,7 +283,8 @@ own telemetry.
 
 #### Data loss
 
-Use the rate of `otelcol_processor_dropped_spans > 0` and
+Use the rate of `otelcol_processor_dropped_log_records > 0`,
+`otelcol_processor_dropped_spans > 0`, and
 `otelcol_processor_dropped_metric_points > 0` to detect data loss. Depending on
 your project's requirements, select a narrow time window before alerting begins
 to avoid notifications for small losses that are within the desired reliability
@@ -317,19 +318,22 @@ logs for messages such as `Dropping data because sending_queue is full`.
 
 #### Receive failures
 
-Sustained rates of `otelcol_receiver_refused_spans` and
-`otelcol_receiver_refused_metric_points` indicate that too many errors were
-returned to clients. Depending on the deployment and the clients' resilience,
-this might indicate clients' data loss.
+Sustained rates of `otelcol_receiver_refused_log_records`,
+`otelcol_receiver_refused_spans`, and `otelcol_receiver_refused_metric_points`
+indicate that too many errors were returned to clients. Depending on the
+deployment and the clients' resilience, this might indicate clients' data loss.
 
-Sustained rates of `otelcol_exporter_send_failed_spans` and
+Sustained rates of `otelcol_exporter_send_failed_log_records`,
+`otelcol_exporter_send_failed_spans`, and
 `otelcol_exporter_send_failed_metric_points` indicate that the Collector is not
 able to export data as expected. These metrics do not inherently imply data loss
 since there could be retries. But a high rate of failures could indicate issues
 with the network or backend receiving the data.
 
 #### Data flow
 
-You can monitor data ingress with the `otelcol_receiver_accepted_spans` and
-`otelcol_receiver_accepted_metric_points` metrics and data egress with the
-`otelcol_exporter_sent_spans` and `otelcol_exporter_sent_metric_points` metrics.
+You can monitor data ingress with the `otelcol_receiver_accepted_log_records`,
+`otelcol_receiver_accepted_spans`, and `otelcol_receiver_accepted_metric_points`
+metrics and data egress with the `otelcol_exporter_sent_log_records`,
+`otelcol_exporter_sent_spans`, and `otelcol_exporter_sent_metric_points`
+metrics.
```
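The dropped, refused, and send-failed metrics named in this diff can be checked programmatically once the Collector's internal telemetry is scraped into a metrics backend. The snippet below is a minimal sketch, not part of the documented guidance: it assumes a Prometheus server at `http://localhost:9090` that already scrapes the Collector, the unmodified metric names shown above, and an illustrative 5-minute rate window. Depending on the Collector version and how the metrics are exposed, counter names may also carry a `_total` suffix.

```python
# Minimal sketch: flag any data-loss or failure metric whose rate is above zero.
# Assumptions (not from the Collector docs): Prometheus is reachable at PROM_URL,
# the metric names below match what is scraped, and a 5m window is acceptable.
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address
WINDOW = "5m"                       # illustrative rate window

FAILURE_METRICS = [
    "otelcol_processor_dropped_log_records",
    "otelcol_processor_dropped_spans",
    "otelcol_processor_dropped_metric_points",
    "otelcol_receiver_refused_log_records",
    "otelcol_receiver_refused_spans",
    "otelcol_receiver_refused_metric_points",
    "otelcol_exporter_send_failed_log_records",
    "otelcol_exporter_send_failed_spans",
    "otelcol_exporter_send_failed_metric_points",
]

def instant_query(promql: str) -> list:
    """Run a PromQL instant query and return the result vector."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

for metric in FAILURE_METRICS:
    # A non-empty result means the per-second rate is above zero for some label set.
    for series in instant_query(f"rate({metric}[{WINDOW}]) > 0"):
        labels = series["metric"]
        rate = float(series["value"][1])
        print(f"WARN {metric} rate={rate:.2f}/s labels={labels}")
```

In practice the same queries would normally live in alerting rules rather than a polling script; the script form is only meant to show which series the updated text asks you to watch.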
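For the data flow metrics, one way to use the accepted and sent counters is to compare ingress and egress rates per signal. A sketch under the same assumptions as above; a persistent gap is not by itself an error, since sampling or filtering processors reduce egress by design.

```python
# Sketch: compare Collector ingress (accepted) vs egress (sent) rates per signal.
# Same assumptions as the previous snippet: Prometheus at PROM_URL, 5m window.
import requests

PROM_URL = "http://localhost:9090"

PAIRS = {
    "logs": ("otelcol_receiver_accepted_log_records", "otelcol_exporter_sent_log_records"),
    "traces": ("otelcol_receiver_accepted_spans", "otelcol_exporter_sent_spans"),
    "metrics": ("otelcol_receiver_accepted_metric_points", "otelcol_exporter_sent_metric_points"),
}

def total_rate(metric: str) -> float:
    """Sum the per-second rate of a counter across all label sets."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": f"sum(rate({metric}[5m]))"},
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

for signal, (ingress, egress) in PAIRS.items():
    in_rate, out_rate = total_rate(ingress), total_rate(egress)
    print(f"{signal}: in={in_rate:.1f}/s out={out_rate:.1f}/s gap={in_rate - out_rate:.1f}/s")
```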
