Qualcomm AI Engine Direct - Delegate mutable buffer and fix the mutable buffer issue #11782

Open · wants to merge 2 commits into main

Conversation

@shewu-quic (Collaborator) commented Jun 18, 2025

Summary:

  • Add a parameter to support mutable buffer delegation in the QNN backend (a hypothetical usage sketch follows this description)
  • Avoid annotating the input node, because mutable buffers are folded during the convert_pt2e process.
  • Deprecate use_legacy_export in executorch llama

cc @cccclai @winskuo-quic @cbilgin
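
As a rough illustration of the first bullet, here is a hypothetical sketch of the lowering-side switch. Only to_edge_transform_and_lower_to_qnn is named later in this thread; the import path and the delegate_mutable_buffers parameter below are assumptions for illustration, not the PR's actual API.

```python
# Hypothetical sketch only: the import path and the mutable-buffer flag name
# are assumptions, not the exact API added by this PR.
from executorch.backends.qualcomm.utils.utils import (  # path assumed
    to_edge_transform_and_lower_to_qnn,
)

edge_program = to_edge_transform_and_lower_to_qnn(
    exported_program,               # output of torch.export.export(model, inputs)
    compiler_specs,                 # QNN compiler specs built elsewhere
    delegate_mutable_buffers=True,  # hypothetical name for the new parameter:
                                    # keep mutable buffers (e.g. the KV cache)
                                    # inside the QNN delegate
)
```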

Commit: Qualcomm AI Engine Direct - Delegate mutable buffer and fix the mutable buffer issue

Summary:
- Add a parameter to support mutable buffer delegation in the QNN backend
  - Set the same memory address for the I/O of a mutable buffer at runtime (see the sketch below)
- Avoid annotating the input node, because mutable buffers are folded during the convert_pt2e process.
- Deprecate use_legacy_export in executorch llama
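
The runtime sub-bullet can be pictured with a plain PyTorch analogy (this is not QNN runtime code): when the delegate binds a mutable buffer's output to the same storage as its input, writing the output mutates the buffer in place and no copy-back is needed.

```python
import torch

# Analogy for "same memory address for I/O of mutable buffer": the input and
# output views share one allocation, so the update lands in place.
cache = torch.zeros(4)                     # mutable buffer, e.g. a KV-cache row
new_vals = torch.arange(4.0)

out = cache                                # output aliases the input storage
out.copy_(new_vals)                        # the delegate's "write output" step

assert cache.data_ptr() == out.data_ptr()  # one allocation, no copy-back
print(cache)                               # tensor([0., 1., 2., 3.])
```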

pytorch-bot bot commented Jun 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11782

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 19c5aa1 with merge base 44d2643:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@cccclai (Contributor) commented Jun 18, 2025

> Avoid annotating the input node because mutable buffers will be folded during the convert_pt2e process.

Is the input node still folded after we land pytorch/ao#2345?

@shewu-quic (Collaborator, Author) commented

> Avoid annotating the input node because mutable buffers will be folded during the convert_pt2e process.
>
> Is the input node still folded after we land pytorch/ao#2345?

Yes, unless we apply run_decomposition after export. I think we can wait until run_decomposition becomes a pass and no longer requires re-tracing; after that we can change it back to annotating the mutable buffer. What do you think?
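
For readers following along, a minimal sketch of the flow under discussion, assuming the standard pt2e entry points (the quantizer is left abstract; in this backend it would be the QNN quantizer):

```python
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e

class Cache(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A mutable buffer: mutated in forward, like a KV cache.
        self.register_buffer("cache", torch.zeros(1, 8))

    def forward(self, x):
        self.cache.add_(x)        # in-place buffer mutation
        return self.cache * 2.0

m = torch.export.export(Cache(), (torch.ones(1, 8),)).module()

# prepared = prepare_pt2e(m, quantizer)   # quantizer: e.g. the QNN quantizer
# prepared(torch.ones(1, 8))              # calibration
# converted = convert_pt2e(prepared)
#
# convert_pt2e folds the mutable buffer into the graph, so any quantization
# annotation placed on the buffer's input node disappears with it; that is
# why this PR avoids annotating those input nodes for now.
```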

@shewu-quic shewu-quic requested a review from mergennachin as a code owner June 19, 2025 04:15
@shewu-quic (Collaborator, Author) commented

By the way, we previously submitted a PR to deprecate the convert_bmm_to_matmul pass. Removing it results in multiple partitions for Meta's llama, because that flow does not use to_edge_transform_and_lower_to_qnn. So I have added the pass back and set the default value of activation to False (a sketch of what the pass does follows this comment).
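
For context, a minimal sketch of what a convert_bmm_to_matmul-style graph pass does; the real executorch pass, and the meaning of its activation flag, may differ:

```python
import torch

def convert_bmm_to_matmul(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Rewrite aten.bmm nodes as aten.matmul.

    For 3-D inputs torch.bmm(a, b) == torch.matmul(a, b), so the rewrite
    preserves numerics; per the comment above, keeping bmm can split Meta's
    llama into multiple QNN partitions.
    """
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is torch.ops.aten.bmm.default:
            node.target = torch.ops.aten.matmul.default
    gm.recompile()
    return gm
```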
