```
(.venv) raphy@raohy:~/llama.cpp/ggify$ pip install -e .
Obtaining file:///home/raphy/llama.cpp/ggify
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Installing backend dependencies ... done
  Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: huggingface-hub~=0.23.0 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from ggify==0.1.0) (0.23.5)
Requirement already satisfied: tqdm~=4.66.5 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from ggify==0.1.0) (4.66.6)
Requirement already satisfied: filelock in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (3.17.0)
Requirement already satisfied: fsspec>=2023.5.0 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (2024.12.0)
Requirement already satisfied: packaging>=20.9 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (24.2)
Requirement already satisfied: pyyaml>=5.1 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (6.0.2)
Requirement already satisfied: requests in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (2.32.3)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from huggingface-hub~=0.23.0->ggify==0.1.0) (4.12.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from requests->huggingface-hub~=0.23.0->ggify==0.1.0) (3.4.1)
Requirement already satisfied: idna<4,>=2.5 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from requests->huggingface-hub~=0.23.0->ggify==0.1.0) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from requests->huggingface-hub~=0.23.0->ggify==0.1.0) (2.3.0)
Requirement already satisfied: certifi>=2017.4.17 in /home/raphy/llama.cpp/.venv/lib/python3.12/site-packages (from requests->huggingface-hub~=0.23.0->ggify==0.1.0) (2024.12.14)
Building wheels for collected packages: ggify
  Building editable for ggify (pyproject.toml) ... done
  Created wheel for ggify: filename=ggify-0.1.0-py3-none-any.whl size=2470 sha256=1af72f869ad4aa06903b43363d6b24936e20d3f6d6ff4fd24f02ae20549f3fb8
  Stored in directory: /tmp/pip-ephem-wheel-cache-puwwffg0/wheels/d4/8a/0f/3aad122340f9d1c799835b0384de5acae2335ecfae16236131
Successfully built ggify
Installing collected packages: ggify
  Attempting uninstall: ggify
    Found existing installation: ggify 0.1.0
    Uninstalling ggify-0.1.0:
      Successfully uninstalled ggify-0.1.0
Successfully installed ggify-0.1.0
```
When trying to convert this PyTorch model, https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian, into ggml, I get a "Could not find a converter" error:
```
(.venv) raphy@raohy:~/llama.cpp/ggify$ python3 ggify.py jonatasgrosman/wav2vec2-large-xlsr-53-italian
README.md: 100%|██████████| 5.53k/5.53k [00:00<00:00, 14.3MB/s]
alphabet.json: 100%|██████████| 266/266 [00:00<00:00, 844kB/s]
config.json: 100%|██████████| 1.78k/1.78k [00:00<00:00, 4.32MB/s]
eval.py: 100%|██████████| 6.20k/6.20k [00:00<00:00, 15.4MB/s]
flax_model.msgpack: 100%|██████████| 1.26G/1.26G [00:11<00:00, 111MB/s]
full_eval.sh: 100%|██████████| 1.37k/1.37k [00:00<00:00, 4.58MB/s]
(…)common_voice_6_0_it_test_predictions.txt: 100%|██████████| 882k/882k [00:00<00:00, 2.91MB/s]
(…)voice_6_0_it_test_predictions_greedy.txt: 100%|██████████| 882k/882k [00:00<00:00, 2.92MB/s]
(…)ion_common_voice_6_0_it_test_targets.txt: 100%|██████████| 884k/884k [00:00<00:00, 22.9MB/s]
(…)2_dev_data_it_validation_predictions.txt: 100%|██████████| 124k/124k [00:00<00:00, 10.2MB/s]
(…)ata_it_validation_predictions_greedy.txt: 100%|██████████| 124k/124k [00:00<00:00, 1.25MB/s]
(…)ty-v2_dev_data_it_validation_targets.txt: 100%|██████████| 122k/122k [00:00<00:00, 12.2MB/s]
(…)ommon_voice_6_0_it_test_eval_results.txt: 100%|██████████| 50.0/50.0 [00:00<00:00, 327kB/s]
(…)oice_6_0_it_test_eval_results_greedy.txt: 100%|██████████| 49.0/49.0 [00:00<00:00, 164kB/s]
preprocessor_config.json: 100%|██████████| 262/262 [00:00<00:00, 886kB/s]
pytorch_model.bin: 100%|██████████| 1.26G/1.26G [00:29<00:00, 42.3MB/s]
special_tokens_map.json: 100%|██████████| 85.0/85.0 [00:00<00:00, 294kB/s]
(…)_dev_data_it_validation_eval_results.txt: 100%|██████████| 49.0/49.0 [00:00<00:00, 180kB/s]
(…)ta_it_validation_eval_results_greedy.txt: 100%|██████████| 49.0/49.0 [00:00<00:00, 250kB/s]
vocab.json: 100%|██████████| 410/410 [00:00<00:00, 1.35MB/s]
vocab.json (0 MB): 100%|██████████| 21/21 [00:48<00:00, 2.30s/file]
Traceback (most recent call last):
  File "/home/raphy/llama.cpp/ggify/ggify.py", line 114, in convert_pth
    stat = os.stat(model_path)
           ^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: './models/jonatasgrosman__wav2vec2-large-xlsr-53-italian/ggml-model-f32.gguf'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/raphy/llama.cpp/ggify/ggify.py", line 313, in <module>
    main()
  File "/home/raphy/llama.cpp/ggify/ggify.py", line 298, in main
    output_paths = list(
                   ^^^^^
  File "/home/raphy/llama.cpp/ggify/ggify.py", line 189, in convert_pth_to_types
    nonquantized_path = convert_pth(
                        ^^^^^^^^^^^^
  File "/home/raphy/llama.cpp/ggify/ggify.py", line 135, in convert_pth
    raise ToolNotFoundError("Could not find a converter")
ToolNotFoundError: Could not find a converter
```
Python 3.12.3
pip 24.0
How can I make this work?