
[SVLS-6367] Update Step Function Trace Context #34264

Merged
dd-mergequeue[bot] merged 15 commits into main from avedmala/lambda-sfn-jsonata-trace-context on Mar 5, 2025

Conversation


@avedmala (Contributor) commented Feb 20, 2025

What does this PR do?

Updates the creation of trace context from a Step Function execution's context object to:

  1. Make use of State.RetryCount and Execution.RedriveCount for inferred parent ID generation in a backwards-compatible manner (related: Update Step Functions Parent ID Generation, datadog-lambda-python#559)
  2. Support the new trace context propagation for multi-level trace merging. These are cases where we receive a Step Function event, but that Step Function itself has a parent service, which can be either a Lambda or another Step Function (related: Explicit trace ID propagation for SFN w/o Hashing, datadog-lambda-python#537)
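The second case can be sketched roughly as follows. This is a minimal illustration of the idea only, with hypothetical names and a simplified hash, not the agent's actual implementation: when the root service is a Lambda, the trace ID is propagated explicitly and used verbatim; when the root is another Step Function, the trace ID is derived deterministically from the root execution ARN, so every downstream hop computes the same value.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"strconv"
)

// traceIDForEvent sketches the two multi-level merging shapes. An
// explicitly propagated root trace ID (Lambda root) wins; otherwise the
// ID is derived from the root execution ARN (Step Function root), so no
// explicit propagation is needed between hops.
func traceIDForEvent(rootTraceID, rootExecutionARN string) uint64 {
	if rootTraceID != "" {
		// Lambda-root case: the upstream trace ID arrives in the carrier.
		id, _ := strconv.ParseUint(rootTraceID, 10, 64)
		return id
	}
	// SFN-root case: hash the root execution ARN and take the upper
	// 64 bits, giving every hop the same deterministic trace ID.
	sum := sha256.Sum256([]byte(rootExecutionARN))
	return binary.BigEndian.Uint64(sum[:8])
}

func main() {
	// Explicitly propagated ID is used as-is.
	fmt.Println(traceIDForEvent("1234567890", ""))
	// Any hop seeing the same root ARN derives the same ID.
	fmt.Println(traceIDForEvent("", "arn:states:...:execution:root") ==
		traceIDForEvent("", "arn:states:...:execution:root"))
}
```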

Motivation

Bring feature parity from the Node and Python layers to the Universal runtimes.

  1. Using these new values for parent ID generation prevents collisions with "retry spans", the Step Function spans that parent a Lambda. Without State.RetryCount, the other values we hash are identical across retries.
  2. Multi-level trace merging is the future: it lets us merge an arbitrary number of Lambda and Step Function traces. The previous approach only supported a maximum depth of two services, losing the context after that. Now we can always preserve some information about the top-most (root) service to keep the trace ID intact, while using the context object to infer the parent ID.
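The collision point in (1) can be illustrated with a toy version of the hash. This is a sketch of the idea under assumed field choices and simplified hashing, not the layer's exact algorithm: without the retry/redrive counters, two attempts of the same state hash identically; appending the counters only when they are non-zero keeps first attempts compatible with IDs produced by older layers.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// inferredSpanID sketches deterministic parent-ID generation: hash the
// execution-context fields so every service observing the same state
// derives the same span ID, without any ID being propagated.
func inferredSpanID(executionID, stateName, enteredTime string, retryCount, redriveCount int) uint64 {
	input := fmt.Sprintf("%s#%s#%s", executionID, stateName, enteredTime)
	// Backwards compatible: only append the counters when non-zero, so
	// first attempts hash the same as they did before this change.
	if retryCount != 0 || redriveCount != 0 {
		input = fmt.Sprintf("%s#%d#%d", input, retryCount, redriveCount)
	}
	sum := sha256.Sum256([]byte(input))
	return binary.BigEndian.Uint64(sum[:8])
}

func main() {
	first := inferredSpanID("arn:...:execution:demo:abc", "LambdaState", "2025-02-20T00:00:00Z", 0, 0)
	retry := inferredSpanID("arn:...:execution:demo:abc", "LambdaState", "2025-02-20T00:00:00Z", 1, 0)
	// Distinct span IDs for the retried attempt, instead of a collision.
	fmt.Println(first != retry)
}
```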

Two things worth looking at for context:

Describe how you validated your changes

Built the extension and tried it with a Java Lambda (link to trace)

[Screenshot 2025-02-26 at 2:06:12 PM]

Possible Drawbacks / Trade-offs

Additional Notes

@github-actions bot added the "medium review" (PR review might take time) label Feb 20, 2025

agent-platform-auto-pr bot commented Feb 20, 2025

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=57848631 --os-family=ubuntu

Note: This applies to commit 6724b32


agent-platform-auto-pr bot commented Feb 20, 2025

Uncompressed package size comparison

Comparison with ancestor 23bab27154076acd2518c183c203d642bf9fa002

Size reduction summary

| package | diff | size | ancestor | threshold |
| --- | --- | --- | --- | --- |
| datadog-agent-aarch64-rpm | -0.00MB | 816.08MB | 816.08MB | 0.50MB |

Diff per package

| package | diff | size | ancestor | threshold |
| --- | --- | --- | --- | --- |
| datadog-agent-x86_64-rpm | 0.00MB | 825.16MB | 825.16MB | 0.50MB |
| datadog-agent-x86_64-suse | 0.00MB | 825.16MB | 825.16MB | 0.50MB |
| datadog-agent-amd64-deb | 0.00MB | 815.37MB | 815.37MB | 0.50MB |
| datadog-agent-arm64-deb | 0.00MB | 806.30MB | 806.30MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | 39.43MB | 39.43MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | 39.51MB | 39.51MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | 39.51MB | 39.51MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | 37.97MB | 37.97MB | 0.50MB |
| datadog-heroku-agent-amd64-deb | 0.00MB | 440.71MB | 440.71MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.00MB | 62.10MB | 62.10MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB | 62.17MB | 62.17MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.00MB | 62.17MB | 62.17MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.00MB | 59.33MB | 59.33MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB | 59.40MB | 59.40MB | 0.50MB |

Decision

✅ Passed


agent-platform-auto-pr bot commented Feb 20, 2025

Static quality checks ✅

Please find below the results from static quality gates

Successful checks

Info

| Quality gate | On disk size | On disk size limit | On wire size | On wire size limit |
| --- | --- | --- | --- | --- |
| static_quality_gate_agent_deb_amd64 | 789.05MiB | 801.8MiB | 192.32MiB | 202.62MiB |
| static_quality_gate_agent_deb_arm64 | 780.42MiB | 793.14MiB | 174.05MiB | 184.51MiB |
| static_quality_gate_agent_rpm_amd64 | 788.96MiB | 801.79MiB | 194.39MiB | 205.03MiB |
| static_quality_gate_agent_rpm_arm64 | 780.41MiB | 793.09MiB | 175.82MiB | 186.44MiB |
| static_quality_gate_agent_suse_amd64 | 789.1MiB | 801.81MiB | 194.39MiB | 205.03MiB |
| static_quality_gate_agent_suse_arm64 | 780.29MiB | 793.14MiB | 175.82MiB | 186.44MiB |
| static_quality_gate_dogstatsd_deb_amd64 | 37.68MiB | 47.67MiB | 9.78MiB | 19.78MiB |
| static_quality_gate_dogstatsd_deb_arm64 | 36.28MiB | 46.27MiB | 8.49MiB | 18.49MiB |
| static_quality_gate_dogstatsd_rpm_amd64 | 37.68MiB | 47.67MiB | 9.79MiB | 19.79MiB |
| static_quality_gate_dogstatsd_suse_amd64 | 37.68MiB | 47.67MiB | 9.79MiB | 19.79MiB |
| static_quality_gate_iot_agent_deb_amd64 | 59.3MiB | 69.0MiB | 14.9MiB | 24.8MiB |
| static_quality_gate_iot_agent_deb_arm64 | 56.66MiB | 66.4MiB | 12.86MiB | 22.8MiB |
| static_quality_gate_iot_agent_rpm_amd64 | 59.3MiB | 69.0MiB | 14.92MiB | 24.8MiB |
| static_quality_gate_iot_agent_rpm_arm64 | 56.66MiB | 66.4MiB | 12.86MiB | 22.8MiB |
| static_quality_gate_iot_agent_suse_amd64 | 59.3MiB | 69.0MiB | 14.92MiB | 24.8MiB |
| static_quality_gate_docker_agent_amd64 | 873.74MiB | 886.12MiB | 293.97MiB | 304.21MiB |
| static_quality_gate_docker_agent_arm64 | 888.36MiB | 900.79MiB | 280.21MiB | 290.47MiB |
| static_quality_gate_docker_agent_jmx_amd64 | 1.05GiB | 1.06GiB | 369.08MiB | 379.33MiB |
| static_quality_gate_docker_agent_jmx_arm64 | 1.05GiB | 1.06GiB | 351.29MiB | 361.55MiB |
| static_quality_gate_docker_dogstatsd_amd64 | 45.82MiB | 55.78MiB | 17.29MiB | 27.28MiB |
| static_quality_gate_docker_dogstatsd_arm64 | 44.47MiB | 54.45MiB | 16.16MiB | 26.16MiB |
| static_quality_gate_docker_cluster_agent_amd64 | 265.01MiB | 274.78MiB | 106.37MiB | 116.28MiB |
| static_quality_gate_docker_cluster_agent_arm64 | 280.97MiB | 290.82MiB | 101.21MiB | 111.12MiB |


cit-pr-commenter bot commented Feb 20, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 29300ef3-b1c2-498a-a23d-d558360582bb

Baseline: 23bab27
Comparison: 6724b32
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- |
| quality_gate_logs | % cpu utilization | +2.01 | [-1.03, +5.06] | 1 | Logs |
| quality_gate_idle | memory utilization | +0.55 | [+0.50, +0.60] | 1 | Logs, bounds checks dashboard |
| file_tree | memory utilization | +0.46 | [+0.39, +0.52] | 1 | Logs |
| quality_gate_idle_all_features | memory utilization | +0.16 | [+0.12, +0.19] | 1 | Logs, bounds checks dashboard |
| file_to_blackhole_0ms_latency_http1 | egress throughput | +0.06 | [-0.73, +0.85] | 1 | Logs |
| file_to_blackhole_500ms_latency | egress throughput | +0.04 | [-0.74, +0.83] | 1 | Logs |
| file_to_blackhole_0ms_latency | egress throughput | +0.03 | [-0.74, +0.80] | 1 | Logs |
| file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.69, +0.70] | 1 | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
| file_to_blackhole_0ms_latency_http2 | egress throughput | -0.00 | [-0.79, +0.79] | 1 | Logs |
| uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.28, +0.26] | 1 | Logs |
| file_to_blackhole_300ms_latency | egress throughput | -0.01 | [-0.64, +0.62] | 1 | Logs |
| file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.09 | [-0.55, +0.37] | 1 | Logs |
| tcp_syslog_to_blackhole | ingress throughput | -0.56 | [-0.63, -0.50] | 1 | Logs |
| file_to_blackhole_1000ms_latency | egress throughput | -0.92 | [-1.70, -0.15] | 1 | Logs |
| uds_dogstatsd_to_api_cpu | % cpu utilization | -0.92 | [-1.78, -0.07] | 1 | Logs |

Bounds Checks: ✅ Passed

| perf experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- |
| file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_logs | intake_connections | 10/10 | |
| quality_gate_logs | lost_bytes | 10/10 | |
| quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.

@github-actions bot added the "long review" (PR is complex, plan time to review it) label and removed the "medium review" label Feb 21, 2025
@avedmala added the "qa/done" (QA done before merge and regressions are covered by tests) and "medium review" labels and removed the "long review" label Feb 21, 2025
@github-actions bot added the "long review" label and removed the "medium review" label Feb 21, 2025
@avedmala (author) commented:

/trigger-ci --variable RUN_ALL_BUILDS=true --variable RUN_KITCHEN_TESTS=true --variable RUN_E2E_TESTS=on --variable RUN_UNIT_TESTS=on --variable RUN_KMT_TESTS=on


dd-devflow bot commented Feb 21, 2025

View all feedback in the Devflow UI.
2025-02-21 21:58:25 UTC ℹ️ Start processing command /trigger-ci --variable RUN_ALL_BUILDS=true --variable RUN_KITCHEN_TESTS=true --variable RUN_E2E_TESTS=on --variable RUN_UNIT_TESTS=on --variable RUN_KMT_TESTS=on


2025-02-21 21:59:07 UTC ℹ️ Gitlab pipeline started

Started pipeline #56646487

@avedmala marked this pull request as ready for review February 21, 2025 21:59
@avedmala requested review from a team as code owners February 21, 2025 21:59
}

func (lp *LifecycleProcessor) initFromStepFunctionPayload(event events.StepFunctionPayload) {
lp.requestHandler.event = event
@avedmala (author) commented Feb 21, 2025:

This value is never used; the Event() interface{} in lifecycle.go was probably consumed by something before, but not anymore, so I removed both.

}

// genericUnmarshal helps extract fields from _datadog.
func genericUnmarshal(data []byte, fieldMap map[string]interface{}) error {
@avedmala (author) commented Feb 21, 2025:

The custom unmarshaler is a consequence of how I defined the types above.

For example, to mirror how the JSON payloads actually look, the StepFunctionPayload in NestedStepFunctionPayload would have to sit at the same top level as RootExecutionID and ServerlessVersion.

But I liked this nested approach because it lets us handle the shared context object the exact same way in every case in carriers.go: we can simply pass the whole payload into extractTraceContextFromStepFunctionContext() without dealing with each case separately.

Happy to change this, but this way felt cleaner overall. The unmarshaler is also easy to extend if we want to include more fields in the future.
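The approach described above can be sketched as follows. This is an illustrative stand-in, not the agent's actual code: decode the raw JSON once into a key-to-RawMessage map, then unmarshal only the requested top-level keys into caller-supplied destinations, so nested and flat payload shapes can share one extraction path.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// genericUnmarshal (sketch): decode the payload once, then copy only the
// requested top-level keys into the destinations the caller provides.
// Keys absent from the payload are simply skipped.
func genericUnmarshal(data []byte, fieldMap map[string]interface{}) error {
	var raw map[string]json.RawMessage
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	for key, dest := range fieldMap {
		if value, ok := raw[key]; ok {
			if err := json.Unmarshal(value, dest); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Hypothetical _datadog-style payload; key names are illustrative.
	payload := []byte(`{"serverless-version":"v1","RootExecutionId":"arn:root"}`)
	var version, rootID string
	err := genericUnmarshal(payload, map[string]interface{}{
		"serverless-version": &version,
		"RootExecutionId":    &rootID,
	})
	fmt.Println(err, version, rootID)
}
```

Adding another field later only means adding one entry to the map, which is the extensibility point mentioned above.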

}
ev = eventPayload
case trigger.LegacyLambdaRootStepFunctionEvent:
var event events.StepFunctionEvent[events.LambdaRootStepFunctionPayload]
@avedmala (author) commented:

It felt a little repetitive to duplicate the whole case for every "legacy lambda" variant when in reality the only difference is that the payload is wrapped in a "Payload": {...}.

The alternative was to parse out this Payload somewhere upstream so that legacy and non-legacy are treated the same here. Either way we're doing essentially the same work, since the extractors won't know the difference. I also didn't want to mutate the value as it's passed down.

return nil, errorNoStepFunctionContextFound
}

tc, err := extractTraceContextFromStepFunctionContext(event.Payload)
@avedmala (author) commented:

Being able to do this is why I liked the custom unmarshal route; otherwise we'd need to be dynamic about types to create a shared handler, or parse out each of the values separately both here and in extractTraceContextFromLambdaRootStepFunctionContext().

@avedmala requested a review from purple4reina March 3, 2025 15:18
type StepFunctionEvent struct {
Payload StepFunctionPayload
type StepFunctionEvent[T any] struct {
Payload T
A contributor commented:

This is beautiful

@purple4reina left a review:

Looks great!


github-actions bot commented Mar 5, 2025

Serverless Benchmark Results

BenchmarkStartEndInvocation comparison between 7877501 and fb9ca92.

tl;dr

Use these benchmarks as an insight tool during development.

  1. Skim down the vs base column in each chart. If there is a ~, then there was no statistically significant change to the benchmark. Otherwise, ensure the estimated percent change is either negative or very small.

  2. The last row of each chart is the geomean. Ensure this percentage is either negative or very small.

What is this benchmarking?

The BenchmarkStartEndInvocation compares the amount of time it takes to call the start-invocation and end-invocation endpoints. For universal instrumentation languages (Dotnet, Golang, Java, Ruby), this represents the majority of the duration overhead added by our tracing layer.

The benchmark is run using a large variety of lambda request payloads. In the charts below, there is one row for each event payload type.

How do I interpret these charts?

The charts below come from benchstat. They represent the statistical change in duration (sec/op), memory overhead (B/op), and allocations (allocs/op).

The benchstat docs explain how to interpret these charts.

Before the comparison table, we see common file-level configuration. If there are benchmarks with different configuration (for example, from different packages), benchstat will print separate tables for each configuration.

The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base". ... The p-value measures how likely it is that any differences were due to random chance (i.e., noise). The "~" means benchstat did not detect a statistically significant difference between the two inputs. ...

Note that "statistically significant" is not the same as "large": with enough low-noise data, even very small changes can be distinguished from noise and considered statistically significant. It is, of course, generally easier to distinguish large changes from noise.

Finally, the last row of the table shows the geometric mean of each column, giving an overall picture of how the benchmarks changed. Proportional changes in the geomean reflect proportional changes in the benchmarks. For example, given n benchmarks, if sec/op for one of them increases by a factor of 2, then the sec/op geomean will increase by a factor of ⁿ√2.

I need more help

First off, do not worry if the benchmarks are failing. They are not tests. The intention is for them to be a tool for you to use during development.

If you would like a hand interpreting the results come chat with us in #serverless-agent in the internal DataDog slack or in #serverless in the public DataDog slack. We're happy to help!

Benchmark stats

# TODO: these messages may be an indication of a real problem and
# should be investigated
"TIMESTAMP http: proxy error: context canceled",
# these are related to datadog-agent/pull/34351
@avedmala (author) commented:

Here is the exact line the logs come from. I didn't bother figuring out the root cause, but here it is in case anyone needs it in the future:

https://github.com/DataDog/datadog-agent/blame/7877501df1d24ee2400938dead31bac9d5035309/comp/logs/agent/config/endpoints.go#L273


avedmala commented Mar 5, 2025

/merge


dd-devflow bot commented Mar 5, 2025

View all feedback in the Devflow UI.
2025-03-05 20:44:53 UTC ℹ️ Start processing command /merge


2025-03-05 20:44:59 UTC ℹ️ MergeQueue: waiting for PR to be ready

This merge request is not mergeable yet, because of pending checks/missing approvals. It will be added to the queue as soon as checks pass and/or get approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.


2025-03-05 21:08:07 UTC ℹ️ MergeQueue: merge request added to the queue

The median merge time in main is 30m.


2025-03-05 21:36:53 UTC ℹ️ MergeQueue: This merge request was merged

@dd-mergequeue bot merged commit efd81dd into main Mar 5, 2025
248 of 250 checks passed
@duncanpharvey left a review:

LGTM!
(reviewed fixes to log Serverless Integration tests)

Labels

long review (PR is complex, plan time to review it), qa/done (QA done before merge and regressions are covered by tests), team/serverless-azure-gcp
