Issues: pytorch/ao
#1745 [Feature request] Make offset functions part of ukernel configs in torchao/experimental
Labels: cpu, enhancement. Opened Feb 20, 2025 by metascroy.
#1744 [Feature request] create utils file for common functions like roundup in torchao experimental kernels
Opened Feb 20, 2025 by metascroy.
#1743 [Feature request] Use guard TORCHAO_ENABLE_ARM_DOTPROD in torchao code
Labels: build, enhancement. Opened Feb 20, 2025 by metascroy.
#1732 verify performance and numerics of float8 training with rowwise scaling
Opened Feb 18, 2025 by vkuzo.
#1724 [Question] Static Quantization for Open-Source LLMs
Labels: quantize, question. Opened Feb 18, 2025 by yang-ahuan.
#1715 tracking removal of the set_inductor_config argument from quantize_
Opened Feb 14, 2025 by vkuzo.
#1701 Model size after quantization
Labels: quantize, question. Opened Feb 11, 2025 by TaylorYangX.
#1699 [DOC] Questions on Integrating a New CPU Operator into TorchAO?
Labels: cpu, question. Opened Feb 11, 2025 by Zijie-Tian.
#1691 [Fp8 Training Feature Request] Smooth SwiGlu and Configurable AdamWFp8
Opened Feb 10, 2025 by vasqu.
#1690 migration of quantize_ workflow configuration from callables to configs
Opened Feb 10, 2025 by vkuzo.
#1686 Performance comparison NF4Tensor vs. BNB Params4bit
Labels: performance. Opened Feb 10, 2025 by psinger.
#1675 [Feature Request] Add bias support for torchao/experimental ops
Opened Feb 5, 2025 by metascroy.
#1664 Tensor subclass methods for DTensor and FSDP2
Labels: question. Opened Feb 5, 2025 by jeromeku.
#1662 [Needs more investigation] int8_weight_only via quantize_() API on torch.float16 models results in NaN values across multiple CPU architectures
Labels: quantize, bug. Opened Feb 4, 2025 by vmpuri.
#1653 [Doc] gemlite version
Labels: question, topic: documentation. Opened Feb 3, 2025 by bhack.
#1621 Unittests Migration Progress (24 of 74 tasks)
Labels: good first issue. Opened Jan 26, 2025 by osbm.