```diff
-Run your local LLM (**llama3**, **gemma**, **mistral**, **phi3**) or use cloud models (**Groq/Llama3**, **OpenAI/gpt4-o**)
+Run local LLMs (**llama3**, **gemma**, **mistral**, **phi3**), custom LLMs through **LiteLLM**, or use cloud models (**Groq/Llama3**, **OpenAI/gpt4-o**)

 Demo answering questions with phi3 on my M1 Macbook Pro:
```
```diff
@@ -27,6 +27,7 @@ Please feel free to contact me on [Twitter](https://twitter.com/rashadphz) or [c
 - [x] Docker deployment setup
 - [x] Add support for [searxng](https://github.com/searxng/searxng). Eliminates the need for external dependencies.
 - [x] Create a pre-built Docker Image
+- [x] Add support for custom LLMs through LiteLLM
 - [ ] Chat History
 - [ ] Chat with local files
```
```diff
@@ -46,6 +47,7 @@ Please feel free to contact me on [Twitter](https://twitter.com/rashadphz) or [c
 - Search with multiple search providers (Tavily, Searxng, Serper, Bing)
 - Answer questions with cloud models (OpenAI/gpt4-o, OpenAI/gpt3.5-turbo, Groq/Llama3)
 - Answer questions with local models (llama3, mistral, gemma, phi3)
+- Answer questions with any custom LLMs through [LiteLLM](https://litellm.vercel.app/docs/providers)

 ## 🏃🏿‍♂️ Getting Started Locally
```
```diff
@@ -144,6 +146,9 @@ SEARCH_PROVIDER=bing
 # Cloud Models
 OPENAI_API_KEY=...
 GROQ_API_KEY=...
+
+# See https://litellm.vercel.app/docs/providers for the full list of supported models
```
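The env-file comment above points at LiteLLM's provider list. LiteLLM addresses any model with a single `provider/model` string, which is what makes the custom-LLM support a small change. As a minimal sketch (the `build_request` helper is illustrative, not part of this repo), the payload you would hand to `litellm.completion` looks like:

```python
def build_request(provider: str, model: str, question: str) -> dict:
    """Assemble the kwargs that litellm.completion() takes.

    LiteLLM routes on a "provider/model" string, e.g. "ollama/llama3"
    for a local model or "groq/llama3-70b-8192" for a cloud one.
    (This helper is a sketch, not code from farfalle itself.)
    """
    return {
        "model": f"{provider}/{model}",
        "messages": [{"role": "user", "content": question}],
    }

# Example payload for a local Ollama model:
req = build_request("ollama", "llama3", "What is farfalle?")
```

With the matching API key exported (e.g. `OPENAI_API_KEY` or `GROQ_API_KEY` from the env file above), the same dict works unchanged across providers — which is the point of routing custom models through LiteLLM.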