  File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 299, in init_adapter
    model = _setup_lora_tuning(
            ^^^^^^^^^^^^^^^^^^^
  File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 181, in _setup_lora_tuning
    model: "LoraModel" = PeftModel.from_pretrained(model, adapter, **init_kwargs)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 545, in from_pretrained
    model.load_adapter(
  File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1151, in load_adapter
    self._update_offload(offload_index, adapters_weights)
  File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1028, in _update_offload
    safe_module = dict(self.named_modules())[extended_prefix]
                  ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj'
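The failing key contains one more `model.` segment than the registered module names, which suggests the prefix PEFT builds in `_update_offload` is one level deeper than what `named_modules()` actually exposes. A minimal sketch of that mismatch (the module names here are illustrative, not taken from the real model):

```python
# Sketch of the failure mode: named_modules() registers one fewer "model."
# level than the prefix PEFT constructs, so the dict lookup raises KeyError.
named_modules = {
    "base_model.model.model.layers.14.mlp.down_proj": object(),  # what exists
}
extended_prefix = "base_model.model.model.model.layers.14.mlp.down_proj"  # extra "model."

try:
    safe_module = named_modules[extended_prefix]
except KeyError as exc:
    print("KeyError:", exc)  # same error as in the traceback above
```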
Who can help?
No response
Information
The official example scripts
My own modified scripts
Tasks
An officially supported task in the examples folder
My own task or dataset (give details below)
Reproduction
llamafactory with peft 0.12.0
Expected behavior
I hope this issue can be fixed so that the merge completes normally.
if index:
    print("index", index)
    print("model.layers.14.mlp.gate_proj.weight" in index)
    offload_index = {
        p: {
            "safetensors_file": index[p]["safetensors_file"],
            "weight_name": p,
            "dtype": str(weight_map[p].dtype).replace("torch.", ""),
        }
        for p in weight_map.keys()
        if p in disk_modules
    }
    kwargs["offload_index"] = offload_index

if (getattr(model, "hf_device_map", None) is not None) and len(
    set(model.hf_device_map.values()).intersection({"cpu", "disk"})
) > 0:
    remove_hook_from_submodules(model)
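The `offload_index` comprehension in the snippet above can be exercised on toy inputs to see what it produces (all names, file paths, and dtypes below are illustrative stand-ins, not from the real checkpoint):

```python
from types import SimpleNamespace

# Toy stand-ins for the real `index`, `weight_map`, and `disk_modules`;
# a SimpleNamespace fakes a tensor's .dtype attribute to stay stdlib-only.
index = {
    "model.layers.14.mlp.gate_proj.weight": {
        "safetensors_file": "model-00002-of-00004.safetensors"
    }
}
weight_map = {
    "model.layers.14.mlp.gate_proj.weight": SimpleNamespace(dtype="torch.float16")
}
disk_modules = {"model.layers.14.mlp.gate_proj.weight"}

# Same comprehension as in the snippet: one offload entry per weight on disk.
offload_index = {
    p: {
        "safetensors_file": index[p]["safetensors_file"],
        "weight_name": p,
        "dtype": str(weight_map[p].dtype).replace("torch.", ""),
    }
    for p in weight_map
    if p in disk_modules
}
print(offload_index["model.layers.14.mlp.gate_proj.weight"]["dtype"])  # float16
```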