A full‑stack application to help users discover the best GPUs based on region, budget, and usage timeline, and to interact with a research‑paper–powered chatbot for detailed insights.
- Video: https://drive.google.com/file/d/1YuwXrVpLt-1kdGy2C1AQuFUaU68KXBgv/view?usp=sharing
- Documentation: https://drive.google.com/file/d/1yUdIYdmQZP2WEUQSlfVG41omk_jmYKGc/view?usp=sharing
The recommendation engine backend is deployed on Render:
https://gpu-recommender.onrender.com/api/recommend-gpus
This API returns the top 5 GPUs for the given parameters (region, operating system, timeline, budget).
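As a quick smoke test, the endpoint can be called directly. The sketch below is a guess at the request shape: the field names (`region`, `os`, `timeline`, `budget`) and the use of a GET query string are assumptions — check the API's actual schema before relying on it.

```python
import requests

API_URL = "https://gpu-recommender.onrender.com/api/recommend-gpus"

def build_params(region, operating_system, timeline, budget):
    # Field names are assumptions; adjust to the API's real schema.
    return {
        "region": region,
        "os": operating_system,
        "timeline": timeline,
        "budget": budget,
    }

def recommend_gpus(region, operating_system, timeline, budget, timeout=30):
    """Fetch the top-5 GPU list from the deployed backend (network call)."""
    resp = requests.get(
        API_URL,
        params=build_params(region, operating_system, timeline, budget),
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()
```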
The Streamlit frontend combines three interactive steps:
**Step 1: Fetch & Clean GPU Names**
- Collects user inputs (region, OS, timeline, budget)
- Calls the backend API and displays cleaned GPU names
**Step 2: GPU Recommendation (LLM-based)**
- Uses LangChain + Groq LLM to generate short 2–3 line recommendations for each GPU
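A minimal sketch of how Step 2 might drive the LLM. The prompt wording and model name are assumptions (the app's actual prompt may differ), and `ChatGroq` is imported lazily so the sketch loads even without `langchain-groq` installed.

```python
PROMPT_TEMPLATE = (
    "In 2-3 lines, say who the {gpu} is best suited for and why, "
    "assuming a budget of about {budget} USD."
)

def build_prompt(gpu: str, budget: int) -> str:
    # Hypothetical prompt wording -- not the app's verified prompt.
    return PROMPT_TEMPLATE.format(gpu=gpu, budget=budget)

def recommend(gpu: str, budget: int) -> str:
    """Requires `pip install langchain-groq` and GROQ_API_KEY in the environment."""
    from langchain_groq import ChatGroq  # lazy import: optional dependency
    llm = ChatGroq(model="llama-3.1-8b-instant")  # model name is an assumption
    return llm.invoke(build_prompt(gpu, budget)).content
```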
**Step 3: PDF ChatBot**
- Loads research papers from a local `research_papers/` directory
- Answers user questions with source snippets
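The Step 3 pipeline (load → split → embed → index → retrieve) could look roughly like this. Class and model names follow the stack listed under Tech Stack but are untested assumptions, and the heavy imports live inside the function so the sketch loads without those packages installed.

```python
def build_pdf_index(papers_dir: str = "research_papers/"):
    """Build a FAISS index over local PDFs.

    Requires langchain-community, langchain-huggingface, faiss-cpu,
    and sentence-transformers; imports are lazy for that reason.
    """
    from langchain_community.document_loaders import PyPDFDirectoryLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_huggingface import HuggingFaceEmbeddings
    from langchain_community.vectorstores import FAISS

    docs = PyPDFDirectoryLoader(papers_dir).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_documents(docs)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    return FAISS.from_documents(chunks, embeddings)

def format_snippet(text: str, source: str, max_chars: int = 200) -> str:
    """Trim a retrieved chunk into a display snippet with its source."""
    snippet = text[:max_chars].rstrip() + ("..." if len(text) > max_chars else "")
    return f"{snippet}\n  -- {source}"
```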
## Project Structure

    project-root/
    ├── backend/              # (Optional local copy of API code)
    ├── frontend/             # Streamlit client
    │   ├── main.py           # Entry point for Streamlit
    │   ├── research_papers/  # PDF files for chatbot context
    │   └── .env              # GROQ_API_KEY, HF_TOKEN
    └── README.md
## Setup

1. **Clone the repo**

       git clone <repository-url>
       cd project-root
2. **Install dependencies**

       pip install -r frontend/requirements.txt
3. **Set up environment**

   Create a `.env` file inside `frontend/`:

       GROQ_API_KEY=<your_groq_api_key>
       HF_TOKEN=<your_huggingface_token>
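`main.py` presumably loads these variables with python-dotenv; the core idea in a few lines of pure Python (a simplified stand-in for illustration, not the library itself):

```python
import os

def load_env_file(path: str = "frontend/.env") -> dict:
    """Parse simple KEY=VALUE lines (blanks and # comments ignored) into os.environ."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```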
4. **Run the Streamlit app**

       cd frontend
       streamlit run main.py

   This opens the app in your default browser at http://localhost:8501.
## Tech Stack

**Backend**
- Python, FastAPI (or Flask)
- Hosted on Render.io
**Frontend**
- Streamlit
- Requests for API calls
- LangChain + ChatGroq for LLM integrations
- FAISS for vector search over PDFs
- SentenceTransformers for embeddings
**Research Papers**
- Local `research_papers/` directory
- RecursiveCharacterTextSplitter for chunking
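RecursiveCharacterTextSplitter splits on progressively finer separators until each chunk fits the size limit. A simplified fixed-window chunker shows the core idea of overlapping chunks (an illustration only, not the library's actual algorithm):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Fixed-window chunking; consecutive chunks share `overlap` characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance less than chunk_size -> overlap
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, which matters for snippet-level answers in the chatbot.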
## Features

- Dynamic top‑5 GPU recommendation by region, OS, timeline, and budget
- Cleaned, human‑readable GPU names
- Short, LLM‑generated usage recommendations
- Interactive PDF‑based chatbot with snippet‑level context
- Easy deployment with Render (backend) and Streamlit (frontend)
## Deployment

- **Backend:** ensure the `RENDER` environment and any API keys are set on Render.io
- **Frontend:** configure `.env` and run `streamlit run main.py`
Feel free to open issues or submit pull requests for improvements!
File to run: frontend/main.py
Happy GPU‑hunting and research exploration! 🎉