bring back torch.autograd.Function for float8 matmul
Summary:
This is a redo of #316.
With upcoming support for scaling granularities other than tensorwise,
we need a good way to control which gemm kernel to call and how to scale
the input tensors in the forward and backward passes. A `torch.autograd.Function`
override is the cleanest way to do that, and as of 2024 this works with
`torch.compile`.
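
For context, a minimal sketch of what such an override can look like (this is an
illustration, not the actual float8_experimental implementation): the forward and
backward each pick their own scale and dtype, and the plain `torch.mm` calls stand
in for the real float8 gemm kernels.

```python
import torch

class Float8MatmulSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # fwd: compute tensorwise scales, cast inputs to e4m3, then gemm.
        x_scale = torch.finfo(torch.float8_e4m3fn).max / x.abs().max().clamp(min=1e-12)
        w_scale = torch.finfo(torch.float8_e4m3fn).max / w.abs().max().clamp(min=1e-12)
        x_fp8 = (x * x_scale).to(torch.float8_e4m3fn)
        w_fp8 = (w * w_scale).to(torch.float8_e4m3fn)
        ctx.save_for_backward(x_fp8, w_fp8, x_scale, w_scale)
        # stand-in for the float8 gemm kernel; undo the scales on the output
        out = torch.mm(x_fp8.to(torch.float32), w_fp8.to(torch.float32))
        return out / (x_scale * w_scale)

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor):
        # bwd: the gradient can use a different dtype (e5m2) and its own scale,
        # independent of the choices made in the forward.
        x_fp8, w_fp8, x_scale, w_scale = ctx.saved_tensors
        g_scale = torch.finfo(torch.float8_e5m2).max / grad_out.abs().max().clamp(min=1e-12)
        g_fp8 = (grad_out * g_scale).to(torch.float8_e5m2)
        grad_x = torch.mm(g_fp8.to(torch.float32), w_fp8.to(torch.float32).t()) / (g_scale * w_scale)
        grad_w = torch.mm(x_fp8.to(torch.float32).t(), g_fp8.to(torch.float32)) / (x_scale * g_scale)
        return grad_x, grad_w
```

The point of the override is that the fwd/bwd scaling and kernel choices live in one
place, rather than being spread across tensor subclass `__torch_dispatch__` handlers.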
Test Plan:
```
./test/test_everything.sh
```
Reviewers:
Subscribers:
Tasks:
Tags:
ghstack-source-id: 42dd595
Pull Request resolved: #336