Android: Switch default library used for Whisper voice typing #11881
Conversation
### Downloading the models

By default, Joplin downloads Whisper models from [this GitHub repository](https://github.com/personalizedrefrigerator/joplin-voice-typing-test/releases). It's possible to download models from a custom location by changing the **Voice typing language files (URL)** in the "Note" tab of the configuration screen.
It may make sense to change this URL to something under the joplin
GitHub organization.
Yes, if we could move them to a repository under github.com/joplin before the final 3.3 release, that would be better. If you need access for this, please let me know.
> If you need access for this, please let me know
I currently don't have the ability to create new repositories in https://github.com/joplin/.
I've added you to the org now and created the repository https://github.com/joplin/voice-typing-models. Let me know if you're able to create releases there, as maybe I need to add you to the repository too.
Thanks for creating that!
> maybe I need to add you to the repository too
At present, I don't seem to have permission to commit to or create releases in joplin/voice-typing-models.
I've changed you to admin; please give it another try.
Summary
whisper.cpp library | demo APK
This pull request switches from `onnx-runtime` to `whisper.cpp`. Rather than adding `whisper.cpp` as a submodule (as many of the language bindings/libraries that use it do), its code is copied to `app-mobile/android/vendor/`.

This library has a few benefits over `onnx-runtime`:
- Crashes involving `onnx-runtime-extensions` when starting voice typing have been observed on 32-bit Android devices.
- Models other than `tiny` can be used (must change the model URL in settings).
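For context, here is a minimal sketch of how the `whisper.cpp` C API is typically driven to transcribe audio. It is illustrative only: the JNI glue, function names, and parameter values used by Joplin's actual integration are not shown here, so treat everything apart from the `whisper.h` calls themselves as assumptions.

```cpp
// Sketch: transcribe a buffer of 16 kHz mono PCM samples with the whisper.cpp
// C API. The surrounding function name and parameter values are hypothetical.
#include <string>
#include <vector>

#include "whisper.h"

std::string transcribeSketch(const char* modelPath, const std::vector<float>& pcm) {
	// Load a GGML model (e.g. a quantized tiny model) from local storage.
	whisper_context* ctx = whisper_init_from_file_with_params(
		modelPath, whisper_context_default_params());
	if (!ctx) return "";

	// Greedy sampling keeps CPU and memory use modest on phones.
	whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
	params.n_threads = 4;     // See the thread-count question in the to-do list below.
	params.language = "en";   // Assumption: the app would pass the configured language.

	std::string result;
	if (whisper_full(ctx, params, pcm.data(), (int) pcm.size()) == 0) {
		// Concatenate the text of every decoded segment.
		for (int i = 0; i < whisper_full_n_segments(ctx); i++) {
			result += whisper_full_get_segment_text(ctx, i);
		}
	}

	whisper_free(ctx);
	return result;
}
```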
Note

At present, this pull request:
To-do

For this pull request:
- Clear old `.onnx` files when the user deletes and redownloads local models.

Optional/for a follow-up pull request:
- The `ggml-tiny-q8_0` model is currently used on all devices. On faster devices, the `ggml-base-q8_0` model should have better performance.
- The `whisper.cpp` Android example has custom logic for determining the number of threads used to run Whisper. Should Joplin do something similar? (See the sketch after this list.)
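If Joplin does add similar logic, one possible heuristic is to derive the thread count from the number of available cores. This is a hedged sketch only; the clamp values are assumptions and are not taken from the `whisper.cpp` Android example or from Joplin.

```cpp
// Sketch: choose a whisper.cpp thread count from the CPU core count.
// The fallback and clamp values below are illustrative assumptions.
#include <algorithm>
#include <thread>

int pickWhisperThreadCount() {
	// hardware_concurrency() may return 0 when the value is unknown.
	const int cores = static_cast<int>(std::thread::hardware_concurrency());
	if (cores <= 0) return 2; // conservative fallback

	// Leave a core for the rest of the app and cap the count to avoid
	// oversubscribing typical phone CPUs.
	return std::clamp(cores - 1, 1, 6);
}
```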
Important

Huge pull request diff: At present, this pull request includes the code of `whisper.cpp` in a `vendor/whisper.cpp` folder. This is instead of adding `whisper.cpp` as a submodule.

Testing
Automated tests
This pull request includes automated tests for:
- `findLongestSilence.cpp`, in `findLongestSilence_test.cpp` (a sketch of the general idea follows this list)
- `whisper.ts`, in `whisper.test.ts`:
  - `whisper_tiny.onnx` models (used by Joplin before this pull request) are cleared when deleting and re-downloading models.
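For readers unfamiliar with the C++ side, the sketch below shows one way a "find the longest silence" routine over PCM audio can work: measure per-window RMS energy and track the longest run of quiet windows. This is only a guess at the general shape; the actual `findLongestSilence.cpp` in this pull request may use different window sizes, thresholds, and return types.

```cpp
// Sketch: locate the longest low-energy (silent) run in 16 kHz mono PCM audio.
// The window size and threshold are illustrative assumptions.
#include <cmath>
#include <cstddef>
#include <vector>

struct SilenceRange {
	size_t start = 0; // first sample of the silent run
	size_t end = 0;   // one past the last sample of the silent run
};

SilenceRange findLongestSilenceSketch(
	const std::vector<float>& pcm,
	size_t windowSize = 480,   // 30 ms at 16 kHz
	float rmsThreshold = 0.01f
) {
	SilenceRange best, current;
	bool inSilence = false;

	for (size_t i = 0; i + windowSize <= pcm.size(); i += windowSize) {
		// Root-mean-square energy of this window.
		float sumSquares = 0;
		for (size_t j = i; j < i + windowSize; j++) sumSquares += pcm[j] * pcm[j];
		const bool silent = std::sqrt(sumSquares / windowSize) < rmsThreshold;

		if (silent) {
			if (!inSilence) {
				current.start = i; // a new silent run begins
			}
			current.end = i + windowSize;
			inSilence = true;
		} else {
			inSilence = false;
		}

		// Keep the longest run seen so far.
		if (inSilence && current.end - current.start > best.end - best.start) {
			best = current;
		}
	}
	return best;
}
```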