[DOCS-11765] Add OP Splunk HEC Distribution of OTel and DDOT #32704
base: master
Conversation
> To send logs from DDOT:
> 1. Deploy the DDOT Agent using helm. See [Install the DDOT Collector as a Kubernetes DaemonSet][5] for instructions.
> 1. [Set up a pipeline][6] on Observabiity Pipelines using the [OpenTelemetry source](#set-up-the-source-in-the-pipeline-ui).
*Observability
> ## Send logs from the Datadog Distribution of OpenTelemetry Collector to Observability Pipelines
>
> To send logs from DDOT:
To send logs from the Datadog Distribution of the OpenTelemetry Collector (DDOT):
> ## Send logs from the Datadog Distribution of OpenTelemetry Collector to Observability Pipelines
>
> To send logs from DDOT:
> 1. Deploy the DDOT Agent using helm. See [Install the DDOT Collector as a Kubernetes DaemonSet][5] for instructions.
Change from:
"Deploy the DDOT Agent"
To:
"Deploy the DDOT Collector"
> {{% observability_pipelines/log_source_configuration/splunk_hec %}}
>
> ## Send logs from the Splunk Distributor of the OpenTelemetry collector to Observability Pipelines
Capitalize "Collector" here. Also, change "Distributor" to "Distribution"
> ## Send logs from the Splunk Distributor of the OpenTelemetry collector to Observability Pipelines
>
> To send logs from the Splunk Distributor of the OpenTelemetry Collector:
Change "Distributor" to "Distribution"
> Use Observability Pipelines' Datadog Agent source to receive logs from the Datadog Agent. Select and set up this source when you [set up a pipeline][1].
>
> **Note**: If you are using the Datadog Distribution of OpenTelemetry Collector, you must [use the OpenTelemetry source to send logs to Observability Pipelines][4].
Add "(DDOT)" after "using the Datadog Distribution of OpenTelemetry Collector"
ckelner
left a comment
Left a few comments; I think there are some things to address.
> 1. Install the Splunk OpenTelemetry Collector based on your environment:
>    - [Kubernetes][2]
>    - [Linux][3]
> 2. Configure the Splunk OpenTelemetry Collector:
I think we should swap steps two and three as seen here: https://github.com/DataDog/logs-psa-private/tree/main/POCs-Opps-hacks/splunk-otel-op#install-and-setup-op -- because you will need the IP Address / Load Balancer URL of OP before you can configure your Splunk OTel Collector.
Swapped the steps and added a note about firewalls.
> # Splunk HEC endpoint URL, if forwarding to Splunk Observability Cloud
> # SPLUNK_HEC_URL=https://ingest.us0.signalfx.com/v1/log
> # If you're forwarding to a Splunk Enterprise instance running on example.com, with HEC at port 8088:
> SPLUNK_HEC_URL=http://0.0.0.0:8088/services/collector
We should replace this with placeholder text. 0.0.0.0 will never work here. It should be the actual addressable IP address or Load balancer URL for OP(s). So something like where we've done this elsewhere in the docs like <IP_ADDRESS_OR_LOAD_BALANCER_URL_FOR_OP> -- though that's a bit of a mouthful.
Thanks, I updated it to the `<OPW_HOST>` placeholder and used the explanation we use in the other parts of the doc. Let me know if that's correct.
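For reference, the resolved snippet would look something like the sketch below. This assumes the `<OPW_HOST>` placeholder agreed on in this thread, standing in for the Worker's IP address or load balancer URL; the surrounding comments are carried over from the quoted hunk.

```
# Splunk HEC endpoint URL, if forwarding to Splunk Observability Cloud
# SPLUNK_HEC_URL=https://ingest.us0.signalfx.com/v1/log
# <OPW_HOST> is the IP address or load balancer URL for the Observability Pipelines Worker
SPLUNK_HEC_URL=http://<OPW_HOST>:8088/services/collector
```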
ckelner
left a comment
More comments! Sorry!
> 1. When you install the Worker, for the OpenTelemetry source environment variables:
>    1. Set your HTTP listener to `0.0.0.0:4318`.
>    1. Set your gRPC listener to `0.0.0.0:4317`.
> 1. After you installed the Worker and deployed the pipeline, update the OpenTelemetry Collector's `collector-config.yaml` to include an exporter that sends logs to Observability Pipelines. For example:
I'm not sure what to properly call this file or if it is commonly named that. @krlv do you have any suggestion?
From this doc, it seems like it'd just be called config.yaml? https://opentelemetry.io/docs/collector/configuration/#location
Here: https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset?tab=helm#configure-the-opentelemetry-collector -- we call it otel-config.yaml 🤣 -- maybe we use that? (Should we also maybe link to that section of docs so they know what we are talking about?)
Works for me :D I updated and linked to that doc. Is that what you are thinking?
Also updated the helm upgrade command with otel-config.yaml.
> **Notes**:
> - These settings are used when setting up the Datadog Agent source in Observability Pipelines:
>   - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED:true`
We can probably just leave it DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED and DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL without the values I think, WDYT?
Works for me!
> - These settings are used when setting up the Datadog Agent source in Observability Pipelines:
>   - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED:true`
>   - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL:"http://opw-observability-pipelines-worker.default.svc.cluster.local:4317/v1/logs"`
>
>   These settings do **not** work when setting up DDOT with the OpenTelemetry source, because the OpenTelemetry Collector's `collector-config.yaml` must be configured with those settings using the exporter.
The real explanation here is that we aren't using the logs agent here at all, we're using the bundled OTel collector aka DDOT, so standard logs isn't carrying any payload so these environment variables won't affect how DDOT is working. I don't know if you want to try to wordsmith that. Maybe @krlv can phrase/explain it better than I can.
Thanks! I updated it a bit, is that correct? I said Datadog Agent, instead of the log agent, does that work?
> otlphttp:
>   endpoint: http://opw-observability-pipelines-worker.default.svc.cluster.local:4318
> ...
> service: pipelines: logs: exporters: [otlphttp]
@maycmlee I might have misled you but I would put each of these on a new line and indent them yaml style
Oh lol, I was wondering what that was. I fixed it.
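For reference, the fixed version of that snippet would look something like the YAML sketch below. The exporter name, endpoint, and pipeline keys come from the quoted hunk above; the indentation is the change the comment asks for, not verified against the final doc.

```yaml
exporters:
  otlphttp:
    endpoint: http://opw-observability-pipelines-worker.default.svc.cluster.local:4318
# ...
service:
  pipelines:
    logs:
      exporters: [otlphttp]
```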
ckelner
left a comment
LGTM! Thanks for going a few rounds with me 💪 🎉
What does this PR do? What is the motivation?
Merge instructions
Merge readiness:
For Datadog employees:
Your branch name MUST follow the `<name>/<description>` convention and include the forward slash (/). Without this format, your pull request will not pass CI, the GitLab pipeline will not run, and you won't get a branch preview. Getting a branch preview makes it easier for us to check any issues with your PR, such as broken links. If your branch doesn't follow this format, rename it or create a new branch and PR.
[6/5/2025] Merge queue has been disabled on the documentation repo. If you have write access to the repo, the PR has been reviewed by a Documentation team member, and all of the required checks have passed, you can use the Squash and Merge button to merge the PR. If you don't have write access, or you need help, reach out in the #documentation channel in Slack.
Additional notes