Exporting Audit Events to Datadog with Fluentd in Kubernetes #46821
pnrao1983 started this conversation in Show and tell
In this guide, we’ll walk through setting up Fluentd to forward Teleport audit logs to Datadog using a Kubernetes deployment. The setup uses two pods: one runs the Teleport Event Handler, which authenticates to the Teleport Auth Service, receives audit events over a gRPC stream, and sends them to Fluentd as JSON payloads over a channel secured with mutual TLS; the other runs Fluentd, which forwards those events to Datadog.
Setup Overview
To follow this guide, refer to the Datadog Integration Documentation.
Important Notes
Make sure you adjust the URL:
• https://fluentd.fluentd.svc.cluster.local/events.log
This DNS name follows the <service>.<namespace>.svc.cluster.local pattern, so update it to match the Service name and namespace you actually deploy Fluentd into.
Datadog Integration Documentation Update
Host Update Based on Datadog Region:
When configuring the Datadog integration, ensure the host field for log intake is updated based on your Datadog region. Here’s the mapping:
US (Default): app.datadoghq.com
US3: us3.datadoghq.com
US5: us5.datadoghq.com
Europe (EU): app.datadoghq.eu
Federal: app.ddog-gov.com
Asia-Pacific (AP1): ap1.datadoghq.com
Asia-Pacific (AP2): ap2.datadoghq.com
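For reference, the mapping above can be captured in a small shell helper. This is purely illustrative; the function name dd_app_host and the region labels are ours, not a Datadog tool:

```shell
# Map a Datadog region label to its web-app host, per the table above.
# (Helper name and labels are ours; not an official Datadog utility.)
dd_app_host() {
  case "$1" in
    us)  echo "app.datadoghq.com" ;;
    us3) echo "us3.datadoghq.com" ;;
    us5) echo "us5.datadoghq.com" ;;
    eu)  echo "app.datadoghq.eu" ;;
    gov) echo "app.ddog-gov.com" ;;
    ap1) echo "ap1.datadoghq.com" ;;
    ap2) echo "ap2.datadoghq.com" ;;
    *)   echo "unknown region: $1" >&2; return 1 ;;
  esac
}

dd_app_host eu   # prints app.datadoghq.eu
```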
Determine Your Datadog Region
After logging in, you can determine which Datadog region you’re using in either of two ways:
Method 1: Check the URL in the Browser
Method 2: View the API and Application Keys Page
Deployment Instructions
Set Environment Variables:
TELEPORT_CLUSTER_ADDRESS=mytenant.teleport.sh:443 # replace with your Teleport cluster address
Run Docker Command:
docker run -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:16.4.0 configure . ${TELEPORT_CLUSTER_ADDRESS?}
Create Certificates:
Modify the certificates based on your Fluentd forwarder pod’s DNS name. Before you generate a new server.crt and server.key, rename the existing server.crt and server.key files. In this test case, I am using the datadog namespace, so my DNS name is fluentd.datadog.svc.cluster.local.
Create updated Server Certificates:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt -subj "/CN=localhost" -addext "subjectAltName=DNS:fluentd.datadog.svc.cluster.local" -CA "ca.crt" -CAkey "ca.key"
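After generating the certificate, it’s worth confirming the SAN actually made it in. The sketch below is self-contained: it creates a throwaway self-signed certificate with the same SAN (your real server.crt is signed with ca.crt/ca.key instead); against your real file, run only the second command. Note that the -addext and -ext options require OpenSSL 1.1.1 or newer:

```shell
# Scratch certificate carrying the SAN used in this guide
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout scratch.key -out scratch.crt -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:fluentd.datadog.svc.cluster.local"

# Print the SAN extension to confirm the DNS entry is present
openssl x509 -in scratch.crt -noout -ext subjectAltName
```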
Here is the complete list of configuration files we are going to use:
Create Kubernetes Secrets and ConfigMaps:
Define your certificates and configurations in YAML files.
To base64-encode the .crt and .key files with the base64 command (Kubernetes Secret data: fields must be base64-encoded):
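A sketch of the encoding step using GNU coreutils. Here demo.crt is a stand-in for your real ca.crt, server.crt, and server.key; on macOS, omit -w0 (the BSD base64 does not support it):

```shell
# Stand-in file; in practice encode ca.crt, server.crt, server.key, etc.
printf 'dummy-cert' > demo.crt

# -w0 disables line wrapping (GNU base64); paste the output into the Secret
base64 -w0 demo.crt
```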
Here is the ConfigMap for fluentd-config:
cat fluentd-config.yaml
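The original ConfigMap contents are not reproduced here, so below is an illustrative sketch only; your generated fluent.conf will differ. It assumes the certificates are mounted at /keys, the fluent-plugin-datadog output plugin is installed in the image, and the Datadog API key is supplied via a DD_API_KEY environment variable:

```yaml
# Sketch, not the exact config from this setup -- adjust to your environment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: datadog
data:
  fluent.conf: |
    <source>
      @type http
      port 8888
      <transport tls>
        client_cert_auth true
        ca_path /keys/ca.crt
        cert_path /keys/server.crt
        private_key_path /keys/server.key
      </transport>
    </source>

    # Without a <match> block covering these tags you will see
    # "no patterns matched tag=events.log" warnings.
    <match events.log session.log>
      @type datadog
      api_key "#{ENV['DD_API_KEY']}"
    </match>
```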
Fluentd pod configuration, run as a Deployment:
cat fluentd_deploy_pod.yaml
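Again, the original manifest is not shown, so here is a hedged sketch of what such a Deployment might look like. The names (fluentd-certs, datadog-api-key, the app: fluentd label) and the image tag are assumptions; note the stock fluent/fluentd image does not include fluent-plugin-datadog, so you would need an image with the plugin baked in:

```yaml
# Illustrative sketch -- adjust names, image, and secrets to your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluentd
  namespace: datadog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1   # example tag; build your own with the Datadog plugin
          ports:
            - containerPort: 8888
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: datadog-api-key   # hypothetical Secret holding your API key
                  key: api-key
          volumeMounts:
            - name: config
              mountPath: /fluentd/etc
            - name: certs
              mountPath: /keys
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: fluentd-config
        - name: certs
          secret:
            secretName: fluentd-certs     # hypothetical Secret with ca.crt/server.crt/server.key
```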
fluentd-service
cat fluentd-service.yaml
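A sketch of the Service, assuming the Fluentd pods carry the label app: fluentd. The metadata here is what produces the DNS name fluentd.datadog.svc.cluster.local used throughout this guide:

```yaml
# Illustrative sketch -- name and namespace determine the cluster DNS name.
apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: datadog
spec:
  selector:
    app: fluentd
  ports:
    - port: 443        # the events URL has no explicit port, so the HTTPS default
      targetPort: 8888 # Fluentd's TLS-enabled http source
```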
teleport-plugin-event-handler-values
cat teleport-plugin-event-handler-values.yaml
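The exact keys come from the teleport-plugin-event-handler Helm chart; the sketch below shows the values this setup cares about, but verify the names against the chart’s own values reference before relying on them:

```yaml
# Sketch of the key chart values -- confirm against the chart documentation.
teleport:
  address: "mytenant.teleport.sh:443"          # your Teleport cluster address
  identitySecretName: "teleport-event-handler-identity"
eventHandler:
  storagePath: "./storage"
  timeout: "10s"
  batch: 20
fluentd:
  url: "https://fluentd.datadog.svc.cluster.local/events.log"
  sessionUrl: "https://fluentd.datadog.svc.cluster.local/session.log"
  certificate:
    secretName: "teleport-event-handler-client-tls"
```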
Apply Kubernetes Resources:
Install the Teleport Plugin Event Handler:
helm install -n datadog teleport-plugin-event-handler teleport/teleport-plugin-event-handler --values teleport-plugin-event-handler-values.yaml --version 16.3.0
Check Deployment Status:
kubectl get all -n datadog
Check the logs of both pods:
My events are getting forwarded now:
Troubleshooting
During setup, you may encounter some errors. Here are common issues and their resolutions:
• [warn]: #0 no patterns matched tag="events.log" — Fluentd received an event, but no <match> directive covers the events.log tag. Add or correct a <match events.log> block in your Fluentd configuration.
• tls: failed to verify certificate: x509: certificate is valid for localhost, not fluentd.teleport.svc.cluster.local — the server certificate lacks a subjectAltName matching the Fluentd service’s DNS name. Regenerate server.crt/server.key with the correct SAN (as in the certificate step above), recreate the secret, and restart the pods.
Conclusion
After following these steps, your events should be successfully uploaded to Datadog. If you encounter issues, refer to the troubleshooting section above or consult the logs for more details.