
Conversation


@amy-why-3459 commented Nov 13, 2025

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for encode-prefill-decode disaggregation by separating the encoder execution path. Workers can now be designated as encoder producers, which exclusively run the multimodal encoder and transfer the resulting embeddings, bypassing the decoding process. The implementation correctly adds logic for producer and consumer roles. However, I've identified a critical bug where consumer ranks incorrectly re-execute the encoder, which overwrites the embeddings they just received. My review provides a necessary fix for this issue.
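To make the flow described above easier to follow, here is a minimal, self-contained toy model of the producer/consumer split. It is a sketch only: the `Worker` class, the `is_producer` flag, and the string stand-in for embeddings are assumptions made for illustration, not the PR's actual code. Only the ideas come from the PR and the review below: a producer rank runs `_execute_mm_encoder` and skips prefill/decode, while a consumer rank gets its `encoder_cache` filled by `maybe_get_ec_connector_output` and must not re-run the encoder.

```python
# Illustrative toy model of encode-prefill-decode disaggregation.
# All names here are stand-ins for the sketch, not the PR's real implementation.
from dataclasses import dataclass, field


@dataclass
class Worker:
    has_ec_transfer: bool       # True when running in a disaggregated setup
    is_producer: bool           # hypothetical flag: encoder-producer role
    encoder_cache: dict = field(default_factory=dict)

    def _execute_mm_encoder(self, request_id: str) -> None:
        # Stand-in for running the multimodal encoder locally.
        self.encoder_cache[request_id] = f"embeddings-computed-on-rank-{id(self)}"

    def maybe_get_ec_connector_output(self, request_id: str, received) -> None:
        # Stand-in for receiving embeddings over the EC connector.
        if received is not None:
            self.encoder_cache[request_id] = received

    def execute_model(self, request_id: str, received=None):
        if self.has_ec_transfer and self.is_producer:
            # Producer rank: run only the encoder and hand the embeddings
            # off to consumers, skipping prefill/decode entirely.
            self._execute_mm_encoder(request_id)
            return self.encoder_cache[request_id]

        # Consumer rank (or non-disaggregated): pick up any embeddings
        # that were transferred; this populates encoder_cache.
        self.maybe_get_ec_connector_output(request_id, received)

        # The guard the review asks for: without it, a consumer rank would
        # re-run the encoder here and overwrite the embeddings it received.
        if not self.has_ec_transfer:
            self._execute_mm_encoder(request_id)

        return self.encoder_cache[request_id]


producer = Worker(has_ec_transfer=True, is_producer=True)
consumer = Worker(has_ec_transfer=True, is_producer=False)
embeddings = producer.execute_model("req-0")
# The consumer keeps the received embeddings instead of recomputing them.
assert consumer.execute_model("req-0", received=embeddings) == embeddings
```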

Comment on lines +1790 to +1791
# Run the multimodal encoder if any.
self._execute_mm_encoder(scheduler_output)

critical

On a consumer rank in a disaggregated setup, maybe_get_ec_connector_output is responsible for receiving the encoder outputs and populating self.encoder_cache. However, the subsequent call to _execute_mm_encoder will re-run the encoder and overwrite these received embeddings. This is a correctness bug that defeats the purpose of receiving the embeddings and also introduces a significant performance overhead.

Producer ranks are handled earlier in execute_model and do not reach this code path. Therefore, this block is executed for consumer ranks and for non-disaggregated setups. The encoder should only be executed in a non-disaggregated setup. A consumer rank can be identified by has_ec_transfer() being true.

I suggest adding a condition to only run the encoder when not in a disaggregated setup (i.e., when has_ec_transfer() is false).

Suggested change
- # Run the multimodal encoder if any.
- self._execute_mm_encoder(scheduler_output)
+ # In a disaggregated setup, consumer ranks receive encoder outputs
+ # and should not run the encoder.
+ # In a non-disaggregated setup, we need to run the encoder.
+ # Producer ranks are handled in `execute_model` and do not reach here.
+ if not has_ec_transfer():
+     # Run the multimodal encoder if any.
+     self._execute_mm_encoder(scheduler_output)
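With this guard, non-disaggregated deployments are unaffected: has_ec_transfer() is false there, so the encoder still runs exactly as before, while consumer ranks keep the embeddings that maybe_get_ec_connector_output already placed in self.encoder_cache.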
