Add Ollama as a Provider, allowing some Features to work with locally hosted LLMs #845
Conversation
There are a few things I've captured while working on this that I think would make good follow-ups:
@vikrampm1 would you please help open issues for the 3 items above and drop them into the 3.4.0 milestone?
This looks great and works really well, @dkotter! Thanks a lot for all your great work here.
if (
	providerSettings?.models &&
	! Array.isArray( providerSettings.models )
) {
	for ( const [ key, value ] of Object.entries(
		providerSettings.models
	) ) {
		models.push( { label: value, value: key } );
	}
}
Optional note: How about loading models by making an API call if the provider settings don't have models saved, or when the endpoint URL is updated by the user in settings? That way we wouldn't need to save settings just to populate models during first-time setup.
Ah, sorry, I missed this while fixing a few other things. I do think either a refresh button, or perhaps always pulling in the current models when on the settings page, would be a nice enhancement here. It would make it easier to see new models after you install them.
Description of the Change
Ollama allows you to easily run various LLMs on your own computer. This provides a couple of key benefits:
This PR integrates Ollama with a number of our existing Features, depending on which model you use:
If using a standard model:
If using a vision model:
If using an embedding model:
This allows you to test quite a few of the Features ClassifAI provides without any cost or data concerns. It could also be used on production sites, though it's worth noting a few downsides to running these models locally:
That said, there really isn't any reason you couldn't use Ollama as a Provider on an actual production site. You'd just need to ensure that any user who wants to use those Features has Ollama installed and configured on their individual computer.
Closes #772
How to test the Change
- `ollama pull llama3.1` as a base LLM;
- `ollama pull nomic-embed-text` as the embedding LLM;
- `ollama pull llava` as the vision LLM (you can see all available models here)
Changelog Entry
Credits
Props @dkotter
Checklist: