
Conversation


@GuoRen868 GuoRen868 commented Nov 12, 2025

What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?
vLLM version: v0.11.0
vLLM main: vllm-project/vllm@24d6314

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill out the PR description when writing the commit message, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new custom operator, DispatchGmmCombineDecode, for the Ascend platform. The changes include the operator definition, kernel implementation, build scripts, and PyTorch bindings. My review identified a few critical issues. The shell script csrc/build_aclnn.sh has a significant environment-variable setup problem that can cause silent failures. Another critical bug is in csrc/pytorch_npu_helper.hpp, where tensor strides are calculated incorrectly; this will break for non-contiguous tensors. Additionally, a confusing duplicated comment on a field in csrc/custom_ops/kernels/dispatch_gmm_combine_decode/op_kernel/dispatch_gmm_combine_decode_tiling.h should be corrected to improve maintainability.


# install custom ops
./build_out/custom_ops/run/CANN_ascend910_93_ubuntu_aarch64.run --install-path=/usr/local/Ascend/ascend-toolkit/latest/opp/
source /usr/local/Ascend/ascend-toolkit/latest/opp/vendors/customize/bin/set_env.bash


critical

The source command on this line only affects the environment of the shell in which the script runs. When this script is executed directly, it runs in a sub-shell, so any environment variables it sets are lost when the script finishes. If the intention is to modify the calling shell's environment, the script should be sourced (e.g., source csrc/build_aclnn.sh) rather than executed, in which case the #!/bin/bash shebang is misleading. As written, this can lead to silent failures in the environment setup.
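The sub-shell behavior described above can be demonstrated with a minimal sketch. The script path and DEMO_OPP_PATH variable below are hypothetical stand-ins, not part of this PR:

```shell
# Hypothetical stand-in for set_env.bash: a script that only exports a variable.
cat > /tmp/set_env_demo.sh <<'EOF'
export DEMO_OPP_PATH=/opt/demo/opp
EOF

# Executing the script runs it in a sub-shell; the export dies with that shell.
bash /tmp/set_env_demo.sh
echo "after execute: '${DEMO_OPP_PATH:-}'"   # prints: after execute: ''

# Sourcing the script runs it in the current shell; the export persists.
source /tmp/set_env_demo.sh
echo "after source: '${DEMO_OPP_PATH:-}'"    # prints: after source: '/opt/demo/opp'
```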

Comment on lines +222 to +237

    // Adapt the weight input for the dispatch_gmm_combine_decode operator
    if (acl_data_type == ACL_INT8 && dimNum == 3) {
        format = ACL_FORMAT_FRACTAL_NZ;
    }

    auto acl_tensor =
        aclCreateTensor(at_tensor.sizes().data(), at_tensor.sizes().size(), acl_data_type, strides.data(),
                        0, format, at_tensor.sizes().data(), at_tensor.sizes().size(),
                        const_cast<void *>(at_tensor.storage().data()));

    return acl_tensor;
}

inline aclScalar *ConvertType(const at::Scalar &at_scalar)
{


critical

The calculation of tensor strides is incorrect as it assumes the tensor is contiguous. This will lead to incorrect memory access and data corruption for non-contiguous tensors. You should use the tensor's actual strides and storage offset provided by PyTorch via at_tensor.strides() and at_tensor.storage_offset().

    const auto dimNum = at_tensor.dim();
    aclFormat format = ACL_FORMAT_ND;

    // Adapt the weight input for the dispatch_gmm_combine_decode operator
    if (acl_data_type == ACL_INT8 && dimNum == 3) {
        format = ACL_FORMAT_FRACTAL_NZ;
    }

    auto acl_tensor =
        aclCreateTensor(at_tensor.sizes().data(), dimNum, acl_data_type, at_tensor.strides().data(),
                        at_tensor.storage_offset(), format, at_tensor.sizes().data(), dimNum,
                        const_cast<void *>(at_tensor.storage().data()));

Comment on lines +27 to +28
uint32_t aicNum; // aivNum
uint32_t aivNum; // aivNum


high

The comment on aicNum appears to be a copy-paste error: in the struct DispatchGmmCombineDecodeInfo, both aicNum and aivNum carry the comment // aivNum, which is confusing and can lead to bugs. Please clarify the purpose of each field and correct the comments; aicNum should presumably denote the AI Core (cube) count and aivNum the AI Vector core count.

Suggested change

- uint32_t aicNum; // aivNum
- uint32_t aivNum; // aivNum
+ uint32_t aicNum; // aicNum
+ uint32_t aivNum; // aivNum
