revise doc #4185
Conversation
Signed-off-by: Angazenn <[email protected]>
Code Review
This pull request updates the documentation for ACL Graphs. I've provided one comment to improve clarity by addressing a potentially confusing parameter name (cudagraph_capture_sizes in an ACL graph context) and an undefined acronym ("ep"). These changes will help ensure the documentation is clear and unambiguous for developers.
> In vLLM, these thresholds are set by `cudagraph_capture_sizes`. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on decode node in large ep.
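To make the thresholding behavior concrete, here is a minimal Python sketch (not vLLM's actual implementation; the function name `pad_to_capture_size` is hypothetical) of how a runtime batch size would be rounded up to the nearest configured capture size so that a pre-captured graph can be replayed:

```python
import bisect

# Capture sizes taken from the example in the doc above.
cudagraph_capture_sizes = [1, 2, 4, 6, 12, 18]

def pad_to_capture_size(batch_size: int):
    """Return the smallest configured capture size >= batch_size,
    or None if the batch exceeds the maximum capture size (in which
    case execution would fall back to eager mode)."""
    idx = bisect.bisect_left(cudagraph_capture_sizes, batch_size)
    if idx == len(cudagraph_capture_sizes):
        return None  # larger than max_capture_size -> eager execution
    return cudagraph_capture_sizes[idx]

print(pad_to_capture_size(5))   # -> 6
print(pad_to_capture_size(18))  # -> 18
print(pad_to_capture_size(20))  # -> None
```

Choosing denser capture sizes around the batch sizes you actually serve (as in the `[1,2,4,6,12,18]` example) reduces wasted padding at the cost of capturing more graphs.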
This is a helpful addition. However, to prevent potential confusion for developers, I suggest a couple of clarifications:
- **Confusing parameter name:** This document is about ACL Graphs for Ascend NPUs, but the parameter is named `cudagraph_capture_sizes`. This is potentially misleading. It would be good to add a note clarifying that this parameter applies to both CUDA and ACL graphs.
- **Undefined acronym:** The term "ep" in "large ep" is ambiguous. Please spell it out (e.g., "expert parallelism") for clarity.
Given this is developer-facing documentation where clarity is crucial to prevent configuration errors, I've provided a suggestion to improve it.
```diff
- In vLLM, these thresholds are set by `cudagraph_capture_sizes`. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on decode node in large ep.
+ In vLLM, these thresholds are set by `cudagraph_capture_sizes`. Note that this parameter applies to both CUDA graphs and Ascend's ACL graphs. The default capture sizes are like `[1,2,4,8,16,24,32,...,max_capture_size]`. You can customize capture sizes to get fine-grained control over performance. For example, we can set `cudagraph_capture_sizes` as `[1,2,4,6,12,18]` when running Qwen3-235B on a decode node in a large expert parallelism (EP) setup.
```
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?