Consider giving this repo a ✨! Thanks!!!
Here's a link to the YouTube video explaining this setup in greater detail:
You need to have the following tools installed:
```sh
# To set up the project with Poetry
poetry install
```
Note: You can use any OpenAI-compatible backend or simply use OpenAI itself.

- Feel free to use any OpenAI-compatible API.
- Make sure you have Ollama installed.
- Make sure you have pulled a model. I recommend Qwen 2.5 32B.
- Use this guide to set up inferix to host an OpenAI-compatible API capable of function calling (see the configuration sketch below).
Set the following environment variables:

- `OPENAI_API_KEY`
- `OPENAI_API_BASE_URL`

You'll probably have to change the models being used throughout the scripts as well.
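For reference, here's a minimal sketch of how these variables might be wired into the `openai` SDK. The default base URL (Ollama's local OpenAI-compatible endpoint) and the `qwen2.5:32b` model tag are assumptions, so adjust them to whatever backend and model you actually run.

```python
import os
from openai import OpenAI

# Hypothetical wiring; the actual scripts may read these variables differently.
# The fallback base URL is Ollama's OpenAI-compatible endpoint (assumption).
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY", "ollama"),  # local backends usually ignore the key
    base_url=os.getenv("OPENAI_API_BASE_URL", "http://localhost:11434/v1"),
)

# Change this in one place when you swap models (an Ollama tag or an OpenAI model name).
MODEL = "qwen2.5:32b"  # assumed tag for Qwen 2.5 32B on Ollama
```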
We've got a few scripts in this project; rough sketches of the ideas behind each one follow the list:
- `poetry run 01` - Making a simple LLM call using the `openai` SDK.
- `poetry run 02` - Demonstration of how in-context learning can really personalize AI responses.
- `poetry run 03` - Demonstration of how vector/similarity search works.
- `poetry run 04` - This script will fail. The chunks generated are too large for embedding to work.
- `poetry run 05` - A basic RAG pipeline with recursive text splitting.
- `poetry run 06` - A slightly better RAG pipeline using Markdown-based splitting.
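As a rough guide to what script 01 does, here's a minimal chat completion with the `openai` SDK. The model tag and prompt are placeholders, not the repo's actual values.

```python
# Minimal chat completion, in the spirit of script 01 (a sketch, not the repo's actual code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY, or build it from the env vars as sketched above
response = client.chat.completions.create(
    model="qwen2.5:32b",  # assumed model tag; use whatever your backend serves
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is retrieval-augmented generation?"},
    ],
)
print(response.choices[0].message.content)
```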
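Script 02's point is that facts and examples placed directly in the prompt steer the model without any fine-tuning. A small sketch of that idea; the profile text here is invented for illustration.

```python
# In-context learning sketch: personalize by putting user facts in the prompt itself.
from openai import OpenAI

client = OpenAI()

# Hypothetical user profile; in practice this could come from a file or a database.
profile = (
    "The user is a beginner Python developer who prefers short answers "
    "with a small code snippet at the end."
)

response = client.chat.completions.create(
    model="qwen2.5:32b",  # assumed model tag
    messages=[
        {"role": "system", "content": f"Answer with this user in mind:\n{profile}"},
        {"role": "user", "content": "How do I read a file?"},
    ],
)
print(response.choices[0].message.content)
```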
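Script 03 covers vector/similarity search: embed the texts, embed the query, rank by cosine similarity. A self-contained sketch, assuming `text-embedding-3-small` as the embedding model (swap in whatever your backend serves).

```python
# Similarity search sketch: embed documents and a query, rank by cosine similarity.
import math
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # assumed; use your backend's embedding model

docs = [
    "Cats are small domesticated carnivores.",
    "Python is a popular programming language.",
    "The Eiffel Tower is in Paris.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model=EMBED_MODEL, input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

doc_vectors = [embed(doc) for doc in docs]
query_vector = embed("Which snippet talks about programming?")

best = max(range(len(docs)), key=lambda i: cosine(query_vector, doc_vectors[i]))
print(docs[best])
```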
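Script 04's failure comes down to embedding models having an input token limit that un-split chunks exceed. A rough illustration using `tiktoken` and an assumed limit of 8,191 tokens (OpenAI's embedding models; local models have their own limits and tokenizers):

```python
# Why oversized chunks fail: count tokens and compare against the embedding model's limit.
import tiktoken

EMBEDDING_TOKEN_LIMIT = 8191  # typical OpenAI embedding limit (assumption; varies by model)

enc = tiktoken.get_encoding("cl100k_base")
huge_chunk = "retrieval augmented generation " * 3000  # stand-in for an un-split document

n_tokens = len(enc.encode(huge_chunk))
print(f"{n_tokens} tokens -> "
      f"{'too large to embed' if n_tokens > EMBEDDING_TOKEN_LIMIT else 'fits'}")
```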
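Script 05 strings the pieces together: split, embed, retrieve, then answer from the retrieved context. A compressed sketch assuming a LangChain-style `RecursiveCharacterTextSplitter` and a hypothetical `docs/handbook.md`; the repo's actual pipeline may differ.

```python
# Basic RAG sketch: split recursively, embed the chunks, retrieve the closest one, answer with it.
import math
from openai import OpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter  # assumed splitter

client = OpenAI()  # or build it from the env vars as sketched earlier
EMBED_MODEL = "text-embedding-3-small"  # assumed embedding model
CHAT_MODEL = "qwen2.5:32b"              # assumed chat model

document = open("docs/handbook.md").read()  # hypothetical source document

# Recursive splitting caps chunk size while preferring paragraph/sentence boundaries.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(document)

def embed(text: str) -> list[float]:
    return client.embeddings.create(model=EMBED_MODEL, input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

question = "What does the handbook say about onboarding?"
chunk_vectors = [embed(chunk) for chunk in chunks]
query_vector = embed(question)

# Retrieve the most similar chunk and let the model answer from it.
best = max(range(len(chunks)), key=lambda i: cosine(query_vector, chunk_vectors[i]))
answer = client.chat.completions.create(
    model=CHAT_MODEL,
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{chunks[best]}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```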
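Script 06 swaps recursive splitting for Markdown-aware splitting, so chunks line up with the document's own headings. A sketch assuming LangChain's `MarkdownHeaderTextSplitter`:

```python
# Markdown-aware splitting sketch: chunk along headings so each chunk is a coherent section.
from langchain_text_splitters import MarkdownHeaderTextSplitter  # assumed splitter

markdown_doc = """# Handbook
## Onboarding
New hires get a buddy for the first two weeks.
## Benefits
We offer a learning budget and flexible hours.
"""

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "h1"), ("##", "h2")]
)
sections = splitter.split_text(markdown_doc)

for section in sections:
    # Each section keeps its heading metadata, which can be embedded alongside the text.
    print(section.metadata, "->", section.page_content[:60])
```

Heading-aligned chunks tend to be more coherent than fixed-size character windows, which is where the "slightly better" retrieval comes from.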