Conversation

@Angazenn Angazenn commented Nov 13, 2025

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: Angazenn <[email protected]>

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the documentation for ACL Graphs. I've provided one comment to improve clarity by addressing a potentially confusing parameter name (cudagraph_capture_sizes in an ACL graph context) and an undefined acronym ("ep"). These changes will help ensure the documentation is clear and unambiguous for developers.

```
In vLLM, these thresholds are set by `cudagraph_capture_sizes`. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on decode node in large ep.
```


Severity: high

This is a helpful addition. However, to prevent potential confusion for developers, I suggest a couple of clarifications:

  1. Confusing Parameter Name: This document is about ACL Graphs for Ascend NPUs, but the parameter is named `cudagraph_capture_sizes`. This is potentially misleading. It would be good to add a note clarifying that this parameter applies to both CUDA and ACL graphs.
  2. Undefined Acronym: The term "ep" in "large ep" is ambiguous. Please spell it out (e.g., "expert parallelism") for clarity.

Given this is developer-facing documentation where clarity is crucial to prevent configuration errors, I've provided a suggestion to improve it.

Suggested change

```diff
-In vLLM, these thresholds are set by `cudagraph_capture_sizes`. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on decode node in large ep.
+In vLLM, these thresholds are set by `cudagraph_capture_sizes`. Note that this parameter applies to both CUDA graphs and Ascend's ACL graphs. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on a decode node in a large expert parallelism (EP) setup.
```
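To make the threshold behavior concrete, here is a minimal, hypothetical sketch of how a runtime batch size could be padded up to the nearest captured graph size. The function name and the eager-mode fallback are illustrative assumptions, not vLLM's actual internals:

```python
import bisect


def pad_to_capture_size(batch_size, capture_sizes):
    """Round a runtime batch size up to the nearest captured graph size.

    Illustrative only: the name and the None-as-eager-fallback convention
    are assumptions for this sketch, not vLLM's real implementation.
    """
    sizes = sorted(capture_sizes)
    # Find the first captured size that is >= batch_size.
    i = bisect.bisect_left(sizes, batch_size)
    if i == len(sizes):
        # Larger than the biggest captured size: no graph can be replayed,
        # so the runtime would fall back to eager execution.
        return None
    return sizes[i]


# With the custom sizes from the example above, a batch of 5 replays the
# graph captured for size 6, and a batch of 13 replays the size-18 graph.
print(pad_to_capture_size(5, [1, 2, 4, 6, 12, 18]))   # → 6
print(pad_to_capture_size(13, [1, 2, 4, 6, 12, 18]))  # → 18
```

This illustrates why coarser capture sizes waste compute on padding: with the default `[1,2,4,8,16,24,32,...]` grid, a decode batch of 13 would be padded to 16, while the custom grid above pads it only to the sizes actually seen in that deployment.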

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description and write a clear commit message to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

@github-actions github-actions bot added the `documentation` label (Improvements or additions to documentation) Nov 13, 2025