Afsvis AI is an offline, intelligent question-answering system that integrates DeepSeek R1 with Ollama to answer user queries locally, without needing an internet connection. It also supports speech synthesis to read answers aloud, enhancing the user experience.
- Local Q&A system with DeepSeek R1 model via Ollama.
- No internet required for question answering.
- Speech synthesis to read answers aloud using pyttsx3.
- Interactive terminal-based interface.
Clone the repository:
git clone https://github.com/afsify/afsvis-ai.git
To install Ollama (which is required to pull the DeepSeek model), follow these steps:
- Visit the Ollama website and download the appropriate version for your system (Windows/Mac/Linux).
- Follow the instructions provided to complete the installation.
After installing Ollama, verify it by running the following command in your terminal:
ollama --version
Once Ollama is installed, you need to pull the DeepSeek R1:1.5b model. Open your terminal and run the following command:
ollama pull deepseek-r1:1.5b
This will download the DeepSeek R1 model locally to your machine.
You will need the following Python libraries:
- pyttsx3 (for text-to-speech)
- ollama (Python wrapper for Ollama)
To install these dependencies, run the following commands in your terminal:
pip install pyttsx3 ollama
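To confirm the installation succeeded, a quick check like the following can be run (this helper is illustrative, not part of the project):

```python
import importlib.util

# Illustrative helper (an assumption, not part of the project): list any
# required packages that are not importable after `pip install`.
def missing_deps(names=("pyttsx3", "ollama")):
    return [n for n in names if importlib.util.find_spec(n) is None]

print(missing_deps())  # an empty list means both libraries are ready
```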
Once the script is set up, run it directly from your terminal:
python main.py
This will:
- Send a query (e.g., "define multiverse?") to the DeepSeek R1 model.
- Stream the response and print it in the terminal.
- Read the response aloud using pyttsx3.
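The three steps above could be implemented roughly as follows. This is a sketch of what a main.py might look like, assuming the ollama Python package's streaming chat API; the model name and example query are taken from this README, and the stream-collecting helper is an illustrative structure, not the project's actual code:

```python
def collect_stream(chunks):
    """Accumulate streamed chat chunks into the full answer text,
    printing each piece to the terminal as it arrives."""
    parts = []
    for chunk in chunks:
        piece = chunk["message"]["content"]
        print(piece, end="", flush=True)
        parts.append(piece)
    return "".join(parts)

def main():
    # Imports deferred so the pure helper above can be reused on its own.
    import ollama
    import pyttsx3

    # 1. Send the query to the local DeepSeek R1 model via Ollama.
    stream = ollama.chat(
        model="deepseek-r1:1.5b",
        messages=[{"role": "user", "content": "define multiverse?"}],
        stream=True,
    )
    # 2. Stream the response and print it in the terminal.
    answer = collect_stream(stream)
    # 3. Read the response aloud using pyttsx3.
    engine = pyttsx3.init()
    engine.say(answer)
    engine.runAndWait()

if __name__ == "__main__":
    main()
```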
- Ollama not found: Ensure that Ollama is correctly installed and added to your system's PATH.
- Model not pulling: Ensure you have an active internet connection while pulling the model for the first time. Once pulled, the model is stored locally and does not require an internet connection.
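The "Ollama not found" case above can be diagnosed from Python with a quick PATH check; this diagnostic snippet is a suggestion, not part of the project:

```python
import shutil

# Diagnostic sketch: report whether the ollama binary is discoverable
# on PATH, mirroring the "Ollama not found" troubleshooting step.
def find_ollama():
    """Return the full path to the ollama executable, or None."""
    return shutil.which("ollama")

if find_ollama() is None:
    print("ollama is not on PATH; re-run the installer or add it manually.")
else:
    print("ollama found at", find_ollama())
```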