Add export recipes for xnnpack (#12069) #12070
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12070
Note: Links to docs will display an error until the docs builds have been completed.
❌ 41 New Failures as of commit ce2010d with merge base 85b91a4. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D77414795
This PR needs a
Summary: Enables basic export recipes for the XNNPACK backend described in pytorch#12069. Differential Revision: D77414795
Force-pushed from b46b07e to ce2010d.
name="dynamic_quant", | ||
quantization_recipe=quant_recipe, | ||
partitioners=[XnnpackPartitioner(config_precisions=ConfigPrecisionType.DYNAMIC_QUANT)], | ||
pre_edge_transform_passes=[DuplicateDynamicQuantChainPass()], |
can remove this
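For context, here is a minimal sketch of how a dynamic-quant recipe along these lines might be assembled end to end. Only the ExportRecipe arguments quoted in the diff above come from this PR; the import paths, the QuantizationRecipe wrapper, and the XNNPACKQuantizer setup are assumptions.

```python
# Sketch only: import locations and the QuantizationRecipe/quantizer wiring are
# assumptions; the ExportRecipe arguments mirror the diff quoted above.
from executorch.backends.transforms.duplicate_dynamic_quant_chain import (
    DuplicateDynamicQuantChainPass,
)
from executorch.backends.xnnpack.partition.config.xnnpack_config import (
    ConfigPrecisionType,
)
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.export import ExportRecipe, QuantizationRecipe  # assumed module
from torch.ao.quantization.quantizer.xnnpack_quantizer import (  # assumed quantizer
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


def get_dynamic_quant_recipe() -> ExportRecipe:
    # Symmetric dynamic quantization via the XNNPACK quantizer.
    quantizer = XNNPACKQuantizer()
    quantizer.set_global(get_symmetric_quantization_config(is_dynamic=True))
    quant_recipe = QuantizationRecipe(quantizers=[quantizer])  # hypothetical signature
    return ExportRecipe(
        name="dynamic_quant",
        quantization_recipe=quant_recipe,
        partitioners=[
            XnnpackPartitioner(config_precisions=ConfigPrecisionType.DYNAMIC_QUANT)
        ],
        pre_edge_transform_passes=[DuplicateDynamicQuantChainPass()],
    )
```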
RECIPE_MAP: dict[str, Callable[[], ExportRecipe]] = {
    "FP32_RECIPE": get_fp32_recipe,
Should we have a recipe for static int8? It would be the same as the dynamic config except is_dynamic=False.
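A sketch of what that static int8 variant could look like, reusing the assumed names from the dynamic-quant sketch above: the quantizer config flips to is_dynamic=False, the partitioner precision becomes STATIC_QUANT (assuming the precision enum has that member), and DuplicateDynamicQuantChainPass is dropped.

```python
# Sketch of the reviewer's suggestion; reuses the assumed imports from the
# dynamic-quant sketch above.
def get_static_int8_recipe() -> ExportRecipe:
    quantizer = XNNPACKQuantizer()
    # Same symmetric config as the dynamic recipe, but with is_dynamic=False.
    quantizer.set_global(get_symmetric_quantization_config(is_dynamic=False))
    quant_recipe = QuantizationRecipe(quantizers=[quantizer])  # hypothetical signature
    return ExportRecipe(
        name="static_int8_quant",
        quantization_recipe=quant_recipe,
        partitioners=[
            XnnpackPartitioner(config_precisions=ConfigPrecisionType.STATIC_QUANT)
        ],
        # No DuplicateDynamicQuantChainPass needed for the static flow.
    )
```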
@@ -0,0 +1,95 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
I think a really good test would be to use these recipes on all of the example_models from aot_compiler:
https://github.com/pytorch/executorch/blob/main/examples/xnnpack/__init__.py#L29-L48
This would be a nice way to make sure these recipes all work.
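A rough sketch of such a test. The example-model registries (MODEL_NAME_TO_MODEL, MODEL_NAME_TO_OPTIONS), EagerModelFactory, the recipe-based export() entry point and its signature, and the module path of RECIPE_MAP are all assumptions not confirmed by this PR.

```python
# Sketch of the suggested test: exercise every recipe in RECIPE_MAP against the
# XNNPACK example models used by aot_compiler. Registry names and the export()
# entry point / signature are assumptions.
import unittest

from executorch.examples.models import MODEL_NAME_TO_MODEL  # assumed registry
from executorch.examples.models.model_factory import EagerModelFactory
from executorch.examples.xnnpack import MODEL_NAME_TO_OPTIONS  # assumed registry
from executorch.export import export  # hypothetical recipe-based export entry point

from .recipes import RECIPE_MAP  # hypothetical module path for this PR's recipes


class TestXnnpackRecipesOnExampleModels(unittest.TestCase):
    def test_recipes_cover_example_models(self) -> None:
        for model_name in MODEL_NAME_TO_OPTIONS:
            # Instantiate the eager model and its example inputs the same way
            # aot_compiler does; extra return values are ignored.
            model, example_inputs, *_ = EagerModelFactory.create_model(
                *MODEL_NAME_TO_MODEL[model_name]
            )
            for recipe_name, recipe_fn in RECIPE_MAP.items():
                with self.subTest(model=model_name, recipe=recipe_name):
                    # Hypothetical call: lower the model with the recipe and make
                    # sure it produces a result without raising.
                    session = export(model, example_inputs, export_recipe=recipe_fn())
                    self.assertIsNotNone(session)
```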
Summary: Enables basic export recipes for the XNNPACK backend described in #12069.
Differential Revision: D77414795