fix lite module for transformers>=5.0 #4488
Conversation
Pull request overview
Fixes lmdeploy.lite quantization/calibration regressions when used with transformers>=5.0, focusing on newer nested config wrappers and numerical stability in AWQ smoothing.
Changes:
- Add fallback logic in calibration to unwrap nested HF config objects before reading head-count fields.
- Prevent potential overflow in AWQ scale normalization by computing extrema in float32.
- Switch auto_awq to absolute imports for calibrate/LAYER_TYPE_MAP.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| lmdeploy/lite/quantization/calibration.py | Unwrap nested config objects in _guess_num_heads; also includes new commented debug prints in the wrapped forward. |
| lmdeploy/lite/quantization/awq.py | Adjusts AWQ smooth_fc_fcs normalization to avoid float16/bfloat16 overflow. |
| lmdeploy/lite/apis/auto_awq.py | Changes relative import of calibrate/LAYER_TYPE_MAP to an absolute import. |
if hasattr(model.config, 'text_config'):
    model.config = model.config.text_config
if hasattr(model.config, 'llm_config'):
    model.config = model.config.llm_config
_guess_num_heads() mutates model.config by reassigning it to text_config / llm_config. This has side effects for the rest of calibration (e.g., later code uses model.config.hidden_size, use_cache, and config updates/saving) and can break models whose wrapper config contains fields not present on the nested config. Use a local variable (e.g., cfg = model.config and unwrap cfg), and leave model.config unchanged.
Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
Motivation
lmdeploy.lite fails to quantize/calibrate some models when running with transformers >= 5.0.
Modification
lmdeploy/lite/quantization/calibration.py: Added fallback logic in _guess_num_heads() to unwrap nested config objects by checking for text_config and llm_config attributes before accessing head count parameters.
lmdeploy/lite/quantization/awq.py: Cast scales.max() and scales.min() to float32 before multiplication to prevent float16/bfloat16 overflow that produces inf.
lmdeploy/lite/apis/auto_awq.py: Changed the import of LAYER_TYPE_MAP and calibrate from a relative import to an absolute import to avoid potential circular import issues.
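The overflow fix in awq.py can be illustrated with a small numpy analogue. This is a sketch only: the real code operates on torch tensors inside smooth_fc_fcs, and `normalize_scales` is a hypothetical helper, but the numerical point is the same — in float16 the product of two large extrema overflows to inf, while the same product in float32 stays finite.

```python
import numpy as np

def normalize_scales(scales: np.ndarray) -> np.ndarray:
    """Illustrative overflow-safe scale normalization.

    float16 tops out around 65504, so max * min can overflow to inf and
    poison the subsequent sqrt/division. Casting the extrema (and the
    intermediate math) to float32 keeps the product finite, then the
    result is cast back to the original dtype.
    """
    s_max = scales.max().astype(np.float32)
    s_min = scales.min().astype(np.float32)
    return (scales.astype(np.float32) / np.sqrt(s_max * s_min)).astype(scales.dtype)
```

For example, with `scales = [300, 400]` in float16 the naive product `300 * 400 = 120000` overflows to inf, while the float32 path yields finite normalized scales.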
Checklist