The Telemetry gateways in Kyma take care of data enrichment, filtering, and dispatching, and natively support Istio communication.
Both the traces and the metrics feature are based on a gateway, which is provisioned as soon as you define any pipeline resource. All telemetry data of the related domain passes through the gateway, so it acts as a central point that provides the following benefits:
- Data Enrichment to achieve a certain data quality
- Filtering to apply namespace filtering and remove noisy system data (individually for logs, traces, and metrics)
- Dispatching to the configured backends (individually for logs, traces, and metrics)
When the Istio module is added to your Kyma cluster, the gateways support mTLS for the communication from the workload to the gateway, as well as for communication to backends running in the cluster. For details, see Istio Support.
The gateways are based on the OTel Collector and come with a concept of pipelines consisting of receivers, processors, and exporters, with which you can flexibly plug pipelines together (see Configuration). Kyma's MetricPipeline provides a hardened setup of an OTel Collector and also abstracts the underlying pipeline concept. Such abstraction has the following benefits:
- Compatibility: An abstraction layer supports compatibility when underlying features change.
- Migratability: Smooth migration experiences when switching underlying technologies or architectures.
- Native Kubernetes support: API provided by Kyma supports an easy integration with Secrets, for example, served by the SAP BTP Service Operator. Telemetry Manager takes care of the full lifecycle.
- Focus: The user doesn't need to understand the underlying concepts.
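To make the abstraction concrete, the following is a minimal sketch of a MetricPipeline that dispatches data to an OTLP backend running in the cluster. The backend name and URL are placeholders, and the exact `apiVersion` and field names may vary between module versions; check the MetricPipeline reference for the authoritative schema:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: MetricPipeline
metadata:
  name: backend
spec:
  output:
    otlp:
      endpoint:
        # Placeholder: an in-cluster OTLP backend reachable over GRPC
        value: http://backend.default:4317
```

Note that the resource only declares the desired output; the receiver, processor, and exporter wiring of the underlying OTel Collector is managed for you by Telemetry Manager.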
You can set up a pipeline with a backend that subsequently instantiates a gateway. For details, see Traces and Metrics. To see whether you've set up your gateways and their push endpoints successfully, check the status of the default Telemetry resource:
```bash
kubectl -n kyma-system get telemetries.operator.kyma-project.io default -oyaml
```
In the status of the returned resource, you see the pipeline health as well as the available push endpoints:
```yaml
endpoints:
  metrics:
    grpc: http://telemetry-otlp-metrics.kyma-system:4317
    http: http://telemetry-otlp-metrics.kyma-system:4318
  traces:
    grpc: http://telemetry-otlp-traces.kyma-system:4317
    http: http://telemetry-otlp-traces.kyma-system:4318
```
For every signal type, there's a dedicated endpoint to which you can push data using OTLP. OTLP supports GRPC- and HTTP-based communication, each having its individual port on every endpoint: use port 4317 for GRPC and port 4318 for HTTP.
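As a quick connectivity check, you can push a single span to the HTTP endpoint with curl from any Pod in the cluster. This is a sketch only: the service name, IDs, and timestamps are hypothetical placeholder values, and the JSON payload is reduced to the minimum OTLP/HTTP structure:

```bash
# Push one minimal span via OTLP/HTTP (run from a Pod inside the cluster).
# traceId must be 32 hex characters, spanId 16; timestamps are nanoseconds.
curl -X POST http://telemetry-otlp-traces.kyma-system:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "curl-demo"}}
        ]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "eee19b7ec3c1b174",
          "name": "demo-span",
          "kind": 1,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'
```

A `2xx` response indicates that the gateway accepted the data; whether it reaches your backend also depends on the pipeline configuration.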
Applications that support OTLP typically use the OTel SDK for instrumentation of the data. You can either hardcode the endpoints in the SDK setup, or use the standard environment variables that configure the OTel exporter, for example:
- Traces GRPC:

  ```bash
  export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://telemetry-otlp-traces.kyma-system:4317"
  ```

- Traces HTTP:

  ```bash
  export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://telemetry-otlp-traces.kyma-system:4318/v1/traces"
  ```

- Metrics GRPC:

  ```bash
  export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://telemetry-otlp-metrics.kyma-system:4317"
  ```

- Metrics HTTP:

  ```bash
  export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://telemetry-otlp-metrics.kyma-system:4318/v1/metrics"
  ```
The Telemetry gateways automatically enrich your data by adding the following attributes:
- `service.name`: The logical name of the service that emits the telemetry data. The gateway ensures that this attribute always has a valid value. If it is not provided by the user, or if its value follows the pattern `unknown_service:<process.executable.name>` as described in the specification, then it is generated from Kubernetes metadata. The gateway determines the service name based on the following hierarchy of labels and names:
  1. `app.kubernetes.io/name` Pod label value
  2. `app` Pod label value
  3. Deployment/DaemonSet/StatefulSet/Job name
  4. Pod name
  5. If none of the above is available, the value is `unknown_service`
- `k8s.*` attributes: These attributes encapsulate various pieces of Kubernetes metadata associated with the Pod, including, but not limited to:
  - Pod name
  - Deployment/DaemonSet/StatefulSet/Job name
  - Namespace
  - Cluster name
- Cloud provider attributes: If the data is available, the gateway automatically adds cloud provider attributes to the telemetry data:
  - `cloud.provider`: Cloud provider name
  - `cloud.region`: Region where the Node runs (from the Node label `topology.kubernetes.io/region`)
  - `cloud.availability_zone`: Zone where the Node runs (from the Node label `topology.kubernetes.io/zone`)
- Host attributes: If the data is available, the gateway automatically adds host attributes to the telemetry data:
  - `host.type`: Machine type of the Node (from the Node label `node.kubernetes.io/instance-type`)
  - `host.arch`: CPU architecture of the system the Node is running on (from the Node label `kubernetes.io/arch`)
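For illustration, given the service-name hierarchy above, a Pod carrying both labels in the following sketch (the label values are hypothetical) would be enriched with `service.name: checkout`, because `app.kubernetes.io/name` takes precedence over the `app` label and over the workload name:

```yaml
# Excerpt of a Pod template; only the labels relevant for enrichment are shown.
metadata:
  labels:
    app.kubernetes.io/name: checkout   # highest precedence for service.name
    app: checkout-legacy               # used only if the label above is absent
```

If neither label were set, the gateway would fall back to the Deployment (or DaemonSet/StatefulSet/Job) name, then to the Pod name.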
The Telemetry module automatically detects whether the Istio module is added to your cluster, and injects Istio sidecars to the Telemetry components. Additionally, the ingestion endpoints of gateways are configured to allow traffic in the permissive mode, so they accept mTLS-based communication as well as plain text.
Clients in the Istio service mesh transparently communicate with the gateway using mTLS. Clients that don't use Istio can communicate with the gateway in plain text mode. The same pattern applies to communication with backends running in the cluster. Backends in external clusters use the configuration specified in the pipeline's output section.