
improve sandbox access resilience through ServiceFQDN#245

Open
FAUST-BENCHOU wants to merge 2 commits into volcano-sh:main from FAUST-BENCHOU:feat/resilience

Conversation

@FAUST-BENCHOU
Contributor

What type of PR is this?

What this PR does / why we need it:

Which issue(s) this PR fixes:
Fixes #205

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


Signed-off-by: zhoujinyu <2319109590@qq.com>
Copilot AI review requested due to automatic review settings March 20, 2026 14:20
@volcano-sh-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign yaozengzeng for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the resilience of sandbox access by prioritizing the use of Kubernetes ServiceFQDN for connecting to sandboxes. This change ensures that ongoing sessions can gracefully recover from underlying pod evictions or recreations, as the ServiceFQDN remains stable while pod IPs might change. A fallback to direct pod IP is maintained for compatibility, and comprehensive testing has been added to validate this enhanced stability.

Highlights

  • Enhanced Sandbox Access Resilience: Implemented a preference for Kubernetes ServiceFQDN over direct pod IP for sandbox access, significantly improving resilience against pod evictions and recreations.
  • Fallback Mechanism: Introduced a fallback to retrieve the pod IP if the ServiceFQDN is not available in the sandbox status, ensuring backward compatibility.
  • Comprehensive Testing: Added a new end-to-end test to specifically validate sandbox access resilience after a pod restart, confirming that sessions remain active through ServiceFQDN. Unit tests were also expanded to cover both ServiceFQDN and pod IP fallback scenarios.
  • Code Refactoring: Refactored the buildSandboxInfo function to accept a generic host parameter, accommodating both ServiceFQDN and pod IP for constructing sandbox entry points.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request improves the resilience of sandbox access by preferring the Kubernetes Service FQDN over the pod IP. This is a solid improvement that makes the system more robust against pod restarts and evictions. The fallback to using the pod IP ensures backward compatibility. The changes are well-implemented and are accompanied by thorough unit tests and a new end-to-end test that effectively validates the resilience feature. I have one suggestion regarding a duplicated constant to improve maintainability.

Comment on lines +60 to +61
// sessionIDLabelKey is the label key for session ID on Sandbox (matches workloadmanager.SessionIdLabelKey)
sessionIDLabelKey = "runtime.agentcube.io/session-id"


Severity: medium

To improve maintainability and avoid potential drift, it's better to use a single source of truth for this label key. You've correctly noted in the comment that this matches workloadmanager.SessionIdLabelKey.

Instead of duplicating the constant, consider moving workloadmanager.SessionIdLabelKey to a shared package (e.g., pkg/common/types) and using it here and in the workloadmanager package. This ensures consistency across the codebase.

For example, you could add it to pkg/common/types/types.go:

const (
    // ... other constants
    SessionIDLabelKey = "runtime.agentcube.io/session-id"
)

Then, you can import github.com/volcano-sh/agentcube/pkg/common/types and use types.SessionIDLabelKey here, removing the local constant definition.

@FAUST-BENCHOU
Contributor Author

@YaoZengzeng @hzxuzhonghu I'm not really sure whether the introduction of ServiceFQDN affects the design intent of agentcube.
Anyway, enjoy your Netherlands trip :)

Contributor

Copilot AI left a comment


Pull request overview

This PR improves sandbox access resilience in the Workload Manager by preferring a stable ServiceFQDN (instead of a one-time pod IP lookup) when building sandbox entrypoints, and adds/updates tests to validate the behavior—including an e2e scenario that deletes the sandbox pod and re-invokes with the same session.

Changes:

  • Prefer Sandbox.Status.ServiceFQDN as the sandbox endpoint host, with a fallback to pod IP lookup when ServiceFQDN is empty.
  • Update unit tests around sandbox info/handler behavior to cover ServiceFQDN and fallback behavior.
  • Add an e2e test that deletes the sandbox pod and verifies the same session still works after recreation.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.

Show a summary per file

  • pkg/workloadmanager/handlers.go — Prefer ServiceFQDN for sandbox access; fall back to pod IP only when needed.
  • pkg/workloadmanager/sandbox_helper.go — Build entrypoint endpoints from a generalized host (FQDN or IP).
  • pkg/workloadmanager/sandbox_helper_test.go — Extend sandbox info tests to validate ServiceFQDN endpoint formatting.
  • pkg/workloadmanager/handlers_test.go — Add coverage for ServiceFQDN vs pod-IP fallback behavior in createSandbox.
  • test/e2e/e2e_test.go — Add e2e test verifying sandbox access after pod deletion/recreation; add a helper to find the sandbox by session label.

Comment on lines +212 to +215
host, err = s.k8sClient.GetSandboxPodIP(ctx, sandbox.Namespace, sandbox.Name, sandboxPodName)
if err != nil {
	return nil, fmt.Errorf("failed to get sandbox %s/%s pod IP: %v", sandbox.Namespace, sandbox.Name, err)
}

Copilot AI Mar 20, 2026


The error returned from GetSandboxPodIP is formatted with %v, which drops wrapping and prevents callers/tests from using errors.Is / errors.As on the underlying error. Use %w when embedding the underlying error (and keep the rest of the message the same).

Copilot uses AI. Check for mistakes.
Comment on lines +983 to +995
// 3. Wait for new pod to be ready (agent-sandbox reconciles and recreates)
require.Eventually(t, func() bool {
	newPod, err := ctx.getPodByOwner(namespace, "Sandbox", sandbox.Name)
	if err != nil || newPod == nil {
		return false
	}
	for _, c := range newPod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}, 3*time.Minute, 5*time.Second, "New pod should become ready after recreation")

Copilot AI Mar 20, 2026


The wait for the "new pod" becoming ready can succeed while the deleted pod is still terminating (it may still be returned by getPodByOwner and may still have PodReady=true), which makes this test a potential false positive/flaky. Capture the original pod UID (or deletion timestamp) before deleting, and in the Eventually predicate require that the pod UID differs from the original and that DeletionTimestamp is nil in addition to PodReady=true.

@codecov-commenter

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 43.42%. Comparing base (845b798) to head (60c1774).
⚠️ Report is 129 commits behind head on main.
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #245      +/-   ##
==========================================
+ Coverage   35.60%   43.42%   +7.81%     
==========================================
  Files          29       30       +1     
  Lines        2533     2614      +81     
==========================================
+ Hits          902     1135     +233     
+ Misses       1505     1358     -147     
+ Partials      126      121       -5     
Flag Coverage Δ
unittests 43.42% <100.00%> (+7.81%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@acsoto
Contributor

acsoto commented Mar 23, 2026

This e2e test can still pass without ever observing the replacement pod: after deletion, getPodByOwner(...) may return the terminating old pod, and the check can also misfire if the old and new pods briefly overlap. Please record the deleted pod's UID/name and wait for a different, non-terminating Ready pod before the second invoke.

- return nil, fmt.Errorf("failed to get sandbox %s/%s pod IP: %v", sandbox.Namespace, sandbox.Name, err)
+ // Prefer ServiceFQDN: K8s Endpoints controller keeps DNS in sync when pod is evicted/recreated.
+ // Fallback to pod IP for agent-sandbox without ServiceFQDN.
+ host := createdSandbox.Status.ServiceFQDN


createdSandbox.Status.ServiceFQDN is preferred here, but upstream agent-sandbox currently hardcodes that field to .svc.cluster.local. If a cluster uses a different DNS domain, this may regress cases that previously worked via pod IP fallback.

  Path:     port.PathPrefix,
  Protocol: string(port.Protocol),
- Endpoint: net.JoinHostPort(podIP, strconv.Itoa(int(port.Port))),
+ Endpoint: net.JoinHostPort(host, strconv.Itoa(int(port.Port))),


Changing Endpoint: net.JoinHostPort(host, ...) means entryPoints[].endpoint now effectively becomes a service DNS name instead of a pod IP-style address. Router usage looks fine, but it would be good to confirm there are no external consumers depending on the old behavior and to update the API docs if this is intentional.



Development

Successfully merging this pull request may close these issues.

[Feature request] Improve sandbox access resilience

5 participants