[rewriter | torchlib] respect ops order in torchscript graph #2134
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@           Coverage Diff           @@
##             main    #2134   +/-   ##
=======================================
  Coverage   72.86%   72.86%
=======================================
  Files         222      222
  Lines       29287    29287
  Branches     3452     3452
=======================================
  Hits        21340    21340
  Misses       6798     6798
  Partials     1149     1149
=======================================

☔ View full report in Codecov by Sentry.
I see. Based on https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/fusion_fastgelu.py, the previous order matches the fuse_4 pattern and the new order matches the fuse_1 pattern. Just a general question: would it be a good idea to support both patterns, perhaps via the check function or a control parameter?
No, the previous pattern does not match anything: it has Pow, but fuse_4 does not.
You're right. Although I wonder whether we would need to cover the second pattern as well.
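
To make the distinction concrete, here is a minimal NumPy sketch (illustrative only, not code from this PR or from fusion_fastgelu.py) of the tanh-based GELU approximation written two ways: with an explicit Pow for x³, and with chained Muls. The two forms are numerically identical, but ONNX Runtime's fusions match exact op sequences, so only the Mul-based form can line up with a Pow-free pattern such as fuse_4:

```python
import numpy as np

def gelu_tanh_with_pow(x):
    # x**3 expressed as a Pow op -- roughly the shape of the previous pattern
    inner = np.sqrt(2.0 / np.pi) * (x + 0.044715 * np.power(x, 3.0))
    return 0.5 * x * (1.0 + np.tanh(inner))

def gelu_tanh_with_mul(x):
    # x**3 expressed as Mul(Mul(x, x), x) -- a Pow-free op sequence
    x3 = x * x * x
    inner = np.sqrt(2.0 / np.pi) * (x + 0.044715 * x3)
    return 0.5 * x * (1.0 + np.tanh(inner))

x = np.linspace(-3, 3, 7, dtype=np.float32)
# Same values, different graphs -- only the op sequence decides which
# fusion pattern (if any) fires.
assert np.allclose(gelu_tanh_with_pow(x), gelu_tanh_with_mul(x))
```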
Thanks! This helps us match the optimization pattern in https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/fusion_fastgelu.py.
ref: #2132 (comment)
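
For background, the rewriter expresses such fusions as pattern/replacement rule pairs. Below is a minimal sketch in the style of onnxscript's rewriter tutorial, using the simpler erf-based GELU pattern rather than the FastGelu pattern this PR targets; treat the exact entry points and keyword arguments as assumptions to verify against the onnxscript docs:

```python
import math

from onnxscript import rewriter
from onnxscript.rewriter import pattern

# Target pattern: an erf-based GELU subgraph, written over pattern variables.
# (Simplified example; the FastGelu fusion discussed above matches the
# tanh-based approximation instead.)
def erf_gelu_pattern(op, x):
    return 0.5 * (x * (op.Erf(x / math.sqrt(2)) + 1.0))

# Replacement: collapse the matched subgraph into a single fused op.
def gelu_replacement(op, x):
    return op.Gelu(x, _domain="com.microsoft")

rule = pattern.RewriteRule(erf_gelu_pattern, gelu_replacement)

# Applying the rule to a loaded ONNX model (assumed entry point):
# model = rewriter.rewrite(model, pattern_rewrite_rules=[rule])
```

Because matching keys on the exact sequence of ops in the graph, the order in which the exporter emits those ops (the subject of this PR) directly determines whether a downstream rule like this can fire.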