A RAG-based interview assistant platform that generates accurate, context-aware answers during interviews, similar to ntro.io.
- Conversation History Answers: Generates context-aware answers based on the conversation so far
- Resume-Based Knowledge Base: Uses the interviewee's resume and profile to provide personalized answers
- Pre-Prepared Answer Vault: Serves curated answers for difficult questions (STAR stories, playbooks)
- Continuous Learning: Improves over time through a recommendation system driven by interview data
See runtime-diagram.md for the complete runtime flow diagram.
See offline-diagram.md for the continuous learning system.
See complete-architecture.md for detailed system design and implementation guide.
- Python 3.10+
- Node.js 18+
- PostgreSQL (or SQLite for development)
- OpenAI API key (or Anthropic API key, or use free sentence-transformers)
- Navigate to the backend:

  ```bash
  cd backend
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r ../requirements.txt
  ```

- Configure the environment:

  ```bash
  # Create a .env file in the backend/ directory
  cp .env.example .env
  # Edit .env and add your API keys
  ```

- Run the backend:

  ```bash
  python main.py
  ```

  The API will be available at http://localhost:8000.
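The exact contents of `.env` depend on which providers you choose. The variable names below are assumptions based on the prerequisites, not verified against the code, so check `.env.example` for the real keys:

```env
# Database (SQLite for development, PostgreSQL in production) -- hypothetical names
DATABASE_URL=sqlite:///./app.db

# LLM / embedding provider keys (only one provider is required)
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
```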
- Navigate to the frontend:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Run the frontend:

  ```bash
  npm run dev
  ```

  The frontend will be available at http://localhost:3000.
```
.
├── backend/
│   ├── app/
│   │   ├── api/            # API endpoints
│   │   ├── core/           # Configuration and database
│   │   ├── models/         # Database models
│   │   └── services/       # Business logic services
│   └── main.py             # FastAPI application
├── frontend/
│   ├── src/
│   │   ├── components/     # React components
│   │   └── App.jsx         # Main app component
│   └── package.json
├── runtime-diagram.md        # Runtime architecture diagram
├── offline-diagram.md        # Offline improvement diagram
├── complete-architecture.md  # Full documentation
└── requirements.txt          # Python dependencies
```
Once the backend is running, interactive API documentation is available at:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Session Memory: Conversation history with timestamps
- Resume & Profile KB: User-specific information
- Prepared Answer Vault: STAR stories and playbooks
- Company/Role Pack: Job-specific context
- Router: Intelligently routes queries to knowledge sources
- Embedding Service: Converts text to vectors
- Vector Store: Manages ChromaDB collections
- LLM Service: Generates answers using GPT-4/Claude
- Guardrails: Validates and filters outputs
- Recommendation Engine: Analyzes logs for improvements
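To illustrate the Router's role, query dispatch across the knowledge sources can be sketched as a simple keyword-based classifier. Everything here (the source names, the cue phrases, the function) is hypothetical, not taken from the codebase; the real router would more likely use embeddings or an LLM classifier:

```python
# Hypothetical sketch of routing a question to knowledge sources.
KNOWLEDGE_SOURCES = {
    "session_memory": ["earlier", "you said", "we discussed"],
    "prepared_vault": ["tell me about a time", "describe a situation"],
    "company_pack": ["why this company", "our product", "this role"],
}

def route_query(question: str) -> list[str]:
    """Return the knowledge sources to search, most specific first."""
    q = question.lower()
    sources = [name for name, cues in KNOWLEDGE_SOURCES.items()
               if any(cue in q for cue in cues)]
    # The resume/profile KB is always consulted as a fallback.
    sources.append("resume_profile")
    return sources

print(route_query("Tell me about a time you led a project"))
# ['prepared_vault', 'resume_profile']
```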
- Create a user
- Upload resume
- Add prepared answers
- Start an interview session
- Ask questions and get AI-generated suggestions
- System learns from your actions
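The "learns from your actions" step can be illustrated with a minimal feedback aggregator. This is purely a sketch of the idea (the log schema and function are invented for illustration); the actual Recommendation Engine analyzes interview logs:

```python
from collections import Counter

def rank_prepared_answers(interaction_log: list[dict]) -> list[tuple[str, int]]:
    """Rank prepared answers by how often the user accepted them.

    Each log entry is assumed to look like {"answer_id": str, "accepted": bool}.
    Frequently accepted answers can be surfaced first in future sessions.
    """
    accepted = Counter(e["answer_id"] for e in interaction_log if e["accepted"])
    return accepted.most_common()

log = [
    {"answer_id": "star_conflict", "accepted": True},
    {"answer_id": "star_conflict", "accepted": True},
    {"answer_id": "star_failure", "accepted": False},
    {"answer_id": "intro_pitch", "accepted": True},
]
print(rank_prepared_answers(log))
# [('star_conflict', 2), ('intro_pitch', 1)]
```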
- Backend: FastAPI, SQLAlchemy, ChromaDB
- Frontend: React, Vite
- LLM: OpenAI GPT-4 or Anthropic Claude
- Embeddings: OpenAI or sentence-transformers
- Vector DB: ChromaDB (can be swapped for Pinecone, Weaviate, etc.)
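Because the vector DB sits behind its own service layer, swapping ChromaDB for Pinecone or Weaviate mostly means implementing the same small interface. A sketch of that idea (the interface and class names are assumptions, not the project's actual code):

```python
import math
from typing import Protocol

class VectorStore(Protocol):
    """Minimal interface a ChromaDB/Pinecone/Weaviate adapter would satisfy."""
    def add(self, doc_id: str, vector: list[float], text: str) -> None: ...
    def query(self, vector: list[float], k: int = 3) -> list[str]: ...

class InMemoryStore:
    """Toy cosine-similarity store used here only to show the contract."""
    def __init__(self) -> None:
        self._docs: dict[str, tuple[list[float], str]] = {}

    def add(self, doc_id: str, vector: list[float], text: str) -> None:
        self._docs[doc_id] = (vector, text)

    def query(self, vector: list[float], k: int = 3) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(self._docs.values(),
                        key=lambda d: cos(vector, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

store = InMemoryStore()
store.add("a", [1.0, 0.0], "resume bullet")
store.add("b", [0.0, 1.0], "STAR story")
print(store.query([0.9, 0.1], k=1))
# ['resume bullet']
```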
[Add your license here]
[Add contribution guidelines]