| Category | |
|---|---|
| Signal types | traces, metrics |
| Backend type | custom in-cluster, third-party remote |
| OTLP-native | yes |
Learn how to install the OpenTelemetry demo application in a Kyma cluster using a provided Helm chart. The demo application will be configured to push trace data using OTLP to the collector that's provided by Kyma, so that it is collected together with the related Istio trace data.
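In practice, pushing by OTLP means the demo's exporters target the OTLP endpoints exposed in-cluster by the Telemetry module. The following snippet only illustrates where the data goes; the service names are assumptions based on the module's default gateway services, and the provided values.yaml already wires this up for you.

```bash
# Assumed default OTLP (gRPC) endpoints of the Kyma telemetry gateways
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://telemetry-otlp-traces.kyma-system:4317"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://telemetry-otlp-metrics.kyma-system:4317"
```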
- Kyma as the target deployment environment
- The Telemetry module is added
- The Telemetry module is configured with pipelines for traces and metrics, for example, by following the SAP Cloud Logging guide or the Prometheus and Loki guides (a minimal pipeline sketch follows this list)
- Istio Tracing is enabled
- Kubectl version that is within one minor version (older or newer) of kube-apiserver
- Helm 3.x
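As a rough illustration of the pipeline prerequisite, a TracePipeline and a MetricPipeline that ship data to an OTLP backend look roughly like the sketch below. The backend endpoint is a placeholder, and the exact apiVersion and fields may differ across Telemetry module versions, so follow the linked guides for the authoritative setup.

```yaml
# Sketch only: replace the endpoint with your backend and verify the apiVersion
# against the Telemetry module version running in your cluster.
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: TracePipeline
metadata:
  name: backend
spec:
  output:
    otlp:
      endpoint:
        value: https://backend.example.com:4317
---
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: MetricPipeline
metadata:
  name: backend
spec:
  output:
    otlp:
      endpoint:
        value: https://backend.example.com:4317
```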
- Export your namespace as a variable with the following command:

  ```bash
  export K8S_NAMESPACE="otel"
  ```
- If you haven't created a Namespace yet, do it now:

  ```bash
  kubectl create namespace $K8S_NAMESPACE
  ```
- To enable Istio injection in your Namespace, set the following label:

  ```bash
  kubectl label namespace $K8S_NAMESPACE istio-injection=enabled
  ```
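  To double-check, you can list the Namespace labels; this is an optional check with standard kubectl, not a required step of this guide:

  ```bash
  kubectl get namespace $K8S_NAMESPACE --show-labels
  ```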
- Export the Helm release name that you want to use. The release name must be unique for the chosen Namespace. Be aware that all resources installed by the chart will be prefixed with that name. Run the following command:

  ```bash
  export HELM_OTEL_RELEASE="otel-demo"
  ```
- Update your Helm installation with the required Helm repository:

  ```bash
  helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
  helm repo update
  ```

- Run the Helm upgrade command, which installs the chart if it's not present yet:

  ```bash
  helm upgrade --install --create-namespace -n $K8S_NAMESPACE $HELM_OTEL_RELEASE open-telemetry/opentelemetry-demo -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/opentelemetry-demo/values.yaml
  ```
The previous command uses the values.yaml provided in this opentelemetry-demo folder, which contains customized settings deviating from the default settings. The customizations in the provided values.yaml cover the following areas:
- Disable the observability tooling provided with the chart
- Configure Kyma Telemetry instead
- Extend memory limits of the demo apps to avoid crashes caused by memory exhaustion
- Adjust initContainers and services of the demo apps to work properly with Istio
Alternatively, you can create your own values.yaml file and adjust the command.
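If you create your own file, its shape might look like the following sketch. The key names depend on the version of the opentelemetry-demo chart, and the gateway service names are assumptions based on the Telemetry module defaults, so treat the provided values.yaml as the authoritative reference.

```yaml
# Illustrative sketch only; verify key names against your opentelemetry-demo chart version.
jaeger:
  enabled: false               # disable the observability tooling bundled with the chart
prometheus:
  enabled: false
grafana:
  enabled: false
opentelemetry-collector:
  enabled: false

default:
  envOverrides:
    # Point the demo components at the Kyma telemetry gateways instead
    # (assumed default service names).
    - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
      value: http://telemetry-otlp-traces.kyma-system:4317
    - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
      value: http://telemetry-otlp-metrics.kyma-system:4317
```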
To verify that the application is running properly, set up port forwarding and call the respective local hosts.
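Before port-forwarding, it can help to confirm that the demo workloads are up; a plain pod listing with standard kubectl (not a step of the original guide) is enough:

```bash
kubectl -n $K8S_NAMESPACE get pods
```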
- Verify the frontend:

  ```bash
  kubectl -n $K8S_NAMESPACE port-forward svc/frontend-proxy 8080
  open http://localhost:8080
  ```
- Verify that traces and metrics arrive in your backend. Both traces and metrics are enriched with the typical resource attributes, like k8s.namespace.name, for easy selection.
- Enable failures with the feature flag service:

  ```bash
  kubectl -n $K8S_NAMESPACE port-forward svc/frontend-proxy 8080
  open http://localhost:8080/feature/
  ```
- Generate load with the load generator:

  ```bash
  kubectl -n $K8S_NAMESPACE port-forward svc/frontend-proxy 8080
  open http://localhost:8080/loadgen/
  ```
When you're done, you can remove the example and all its resources from the cluster by calling Helm:

```bash
helm delete -n $K8S_NAMESPACE $HELM_OTEL_RELEASE
```
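If you also want to remove the Namespace created earlier, do so only if nothing else runs in it; standard kubectl suffices:

```bash
kubectl delete namespace $K8S_NAMESPACE
```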