🚀 Feature: Guide on local LLM deployment #1241
Labels: docs, documentation, enhancement, hacktoberfest, help wanted
🔖 Feature description
I recently added a swappable base_url for the OpenAI client, so if you configure DocsGPT with LLM_NAME=openai, you can run any model you want locally behind an OpenAI-compatible server, for example vLLM or Ollama.

A few things to keep in mind:

- Put a placeholder value such as not-key in api_key; it just needs to be non-empty.
- Specify the correct MODEL_NAME for the model you actually launched via one of the methods above (or any other OpenAI-compatible server).
- Specify OPENAI_BASE_URL; for a local Ollama instance it is typically http://localhost:11434/v1. Be careful when running DocsGPT inside a container: OPENAI_BASE_URL must point to an address that is reachable from inside the container, not the container's own localhost.
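For whoever picks up the guide, here is a minimal sketch of what this configuration does under the hood: an OpenAI-compatible client pointed at a local server. It is not DocsGPT code; it assumes the standard openai Python package (v1.x) and an Ollama server already running locally, and the model name "llama3" plus the URL are placeholders to be swapped for whatever was actually launched.

```python
# Minimal sketch (not DocsGPT code): talk to a local OpenAI-compatible server
# the same way DocsGPT does when LLM_NAME=openai is configured.
# Assumes `pip install openai` (v1.x) and an Ollama server running locally;
# "llama3" and the URL below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # OPENAI_BASE_URL: Ollama's OpenAI-compatible endpoint
    api_key="not-key",                     # api_key: any non-empty placeholder works
)

response = client.chat.completions.create(
    model="llama3",  # MODEL_NAME: must match the model you launched locally
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

The same three values (base URL, placeholder key, model name) are what the guide should tell readers to set in DocsGPT's configuration; when DocsGPT itself runs in a container, localhost has to be replaced with an address the container can reach (for example http://host.docker.internal:11434/v1 on Docker Desktop).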
🎤 Why is this feature needed?
So people can better understand how to use local LLMs with DocsGPT.
✌️ How do you aim to achieve this?
Someone can pick up this task and create a guide in our docs, located in the docs/ folder.
🔄️ Additional Information
No response
👀 Have you spent some time to check if this feature request has been raised before?
Are you willing to submit a PR?
None