Welcome to an end-to-end machine learning dashboard app for model evaluation and deployment: built at a hackathon, designed for production!
🔹 A powerful and stylish Streamlit dashboard for comparing multiple classification models.
🔹 Real-time model testing with uploaded CSVs.
🔹 Fully tuned pipelines, metrics analysis, and interactive visualizations.
The purpose of this project is to:
- Train and evaluate multiple classification algorithms
- Use cross-validation and hyperparameter tuning for optimization
- Compare models based on metrics like:
  - Accuracy
  - AUC Score
  - F1-Score
  - Precision, Recall, Specificity
- Visualize and interpret results through an interactive Streamlit dashboard
- Enable end-users to upload their own CSV and get predictions from tuned models.
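The train-and-compare loop above can be sketched as follows. This is a minimal illustration, not the repo's actual training script: it uses sklearn's built-in breast-cancer dataset as a stand-in for the Kaggle dataset described later, and compares three of the listed models with 5-fold cross-validation.

```python
# Sketch: train several classifiers and compare cross-validated accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
}

# Mean 5-fold CV accuracy per model.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="accuracy").mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```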
- ✅ Logistic Regression
- ✅ Decision Tree Classifier
- ✅ Random Forest Classifier
- ✅ Support Vector Machine
- ✅ XGBoost / LightGBM
- ✅ Hyperparameter Tuning (Grid Search)
- ✅ Feature Importance Charts
- ✅ Dynamic Bar Graphs (Plotly)
- ✅ Glassmorphic Streamlit UI
- ✅ Upload CSV to Test Models Live
- ✅ Auto-Pickle & Save All Models
- ✅ Responsive layout with dark mode and Fira Code font
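The "Auto-Pickle & Save All Models" step might look like the sketch below. The file names and the small demo models are illustrative; only the `models/` folder name follows the repo layout described later.

```python
# Sketch: fit each model, then pickle it into a models/ folder.
import os
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
}

os.makedirs("models", exist_ok=True)
for name, model in models.items():
    model.fit(X, y)
    with open(f"models/{name}.pkl", "wb") as f:
        pickle.dump(model, f)  # one .pkl file per tuned model
```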
The dataset is sourced from Kaggle. After selection:
- Null values handled
- Categorical features encoded
- Numeric features scaled
- Train/test split applied with stratification
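The preprocessing steps above can be sketched like this. The tiny DataFrame and its column names are purely illustrative; the real dataset comes from Kaggle as noted above.

```python
# Sketch: null handling, categorical encoding, scaling, stratified split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29, 44, 36],
    "city": ["NY", "LA", "NY", "SF", "LA", "SF", "NY", "LA"],
    "target": [0, 1, 0, 1, 1, 0, 1, 0],
})

df = df.dropna()                           # handle null values
df = pd.get_dummies(df, columns=["city"])  # encode categorical features

X, y = df.drop(columns="target"), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on train only
X_test_scaled = scaler.transform(X_test)        # reuse train statistics
```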
An example input format is available in `example_input.csv`.
```bash
git clone https://github.com/yourusername/your-repo-name.git
cd your-repo-name
python -m venv venv
source venv/bin/activate   # or venv\Scripts\activate on Windows
pip install -r requirements.txt
streamlit run app.py
```
Make sure the `models/` folder exists with the pickled model files.
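Loading everything from `models/` at app start-up might look like the sketch below. The helper name and glob pattern are illustrative; only the `models/` folder and `.pkl` extension come from the setup above.

```python
# Sketch: load every pickled model from models/ at start-up.
import glob
import os
import pickle

def load_models(folder="models"):
    """Return {model_name: estimator} for every .pkl file in folder."""
    loaded = {}
    for path in sorted(glob.glob(os.path.join(folder, "*.pkl"))):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path, "rb") as f:
            loaded[name] = pickle.load(f)
    return loaded

# Tiny self-contained demo: write one placeholder pickle, load it back.
os.makedirs("models", exist_ok=True)
with open("models/demo.pkl", "wb") as f:
    pickle.dump({"kind": "placeholder"}, f)

models = load_models()
print(sorted(models))
```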
- Used `Pipeline()` from sklearn for each model
- Feature encoding, scaling, and classification happen in one step
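A minimal sketch of such a pipeline, assuming one numeric and one categorical feature (the column names here are made up for illustration):

```python
# Sketch: encoding, scaling, and classification in a single Pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29],
    "city": ["NY", "LA", "NY", "SF", "LA", "SF"],
    "target": [0, 1, 0, 1, 1, 0],
})
X, y = df[["age", "city"]], df["target"]

# Scale numeric columns, one-hot encode categorical columns.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

pipe = Pipeline([
    ("prep", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X, y)          # preprocessing + training in one call
preds = pipe.predict(X)  # preprocessing + inference in one call
```

Because preprocessing lives inside the pipeline, the pickled object applies the exact same transformations at prediction time.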
| Metric | Description |
|---|---|
| Accuracy | Percentage of correct predictions |
| AUC Score | Area under the ROC curve |
| F1-Score | Harmonic mean of precision and recall |
| Precision | True Positives / Predicted Positives |
| Recall | True Positives / Actual Positives |
| Specificity | True Negatives / Actual Negatives |
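These metrics can be computed with sklearn, except specificity, which has no built-in scorer and is derived from the confusion matrix. The labels and probabilities below are made-up demo values:

```python
# Sketch: compute the table's metrics for one model's predictions.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.2, 0.6, 0.3, 0.8, 0.9, 0.4, 0.7]  # P(class 1)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "Accuracy": accuracy_score(y_true, y_pred),
    "AUC Score": roc_auc_score(y_true, y_prob),
    "F1-Score": f1_score(y_true, y_pred),
    "Precision": precision_score(y_true, y_pred),
    "Recall": recall_score(y_true, y_pred),
    "Specificity": tn / (tn + fp),  # true negative rate
}
print(metrics)
```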
- GridSearchCV for exhaustive tuning
- Best parameters auto-selected for each model
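A minimal GridSearchCV sketch; the parameter grid and dataset below are illustrative, not the grids used in this repo:

```python
# Sketch: exhaustive hyperparameter tuning with GridSearchCV.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="roc_auc",
    cv=3,
)
grid.fit(X, y)  # tries every combination, keeps the best by CV score
print(grid.best_params_, round(grid.best_score_, 3))
```

`grid.best_estimator_` is the refit model with the winning parameters, ready to be pickled.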
- Theme toggle: Light ✨ / Dark 🌚
- Plotly-based interactive charts
- Hover effects, rounded corners, and modern Fira Code font
- Upload a `.csv` file to test any tuned model
- Performance chart comparing Accuracy and AUC across models
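The core of the "upload a CSV, get predictions" flow can be sketched as below. In the app this would be wired to Streamlit's `st.file_uploader`; here the same logic is demonstrated with an in-memory file-like object so it runs anywhere, with iris standing in for a user's CSV.

```python
# Sketch: read an uploaded CSV and run it through a trained model.
import io

import pandas as pd
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def predict_uploaded_csv(uploaded_file, model):
    """Read a CSV from a file-like object and return predictions."""
    df = pd.read_csv(uploaded_file)
    return model.predict(df)

# Demo: an in-memory CSV stands in for st.file_uploader's return value.
X, y = load_iris(return_X_y=True, as_frame=True)
model = DecisionTreeClassifier(random_state=42).fit(X, y)

csv_bytes = io.BytesIO(X.head(3).to_csv(index=False).encode())
preds = predict_uploaded_csv(csv_bytes, model)
print(preds)
```

The uploaded file must match the training schema, which is why the README points users at `example_input.csv`.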
Easily deploy on Streamlit Cloud:
https://share.streamlit.io/yourusername/your-repo-name/main/app.py
You can also deploy via:
- Hugging Face Spaces
- Render.com
- Local containerized environments (Docker)
- Run `app.py`
- Upload a CSV following the `example_input.csv` format
- Select any model from the sidebar dropdown
- Visualize predictions, performance, and insights
| Dashboard View | Feature Importances |
|---|---|
| ![]() | ![]() |
| Model | Accuracy | AUC Score | F1 Score | Recall | Specificity |
|---|---|---|---|---|---|
| Random Forest | 0.92 | 0.94 | 0.91 | 0.90 | 0.93 |
| XGBoost | 0.91 | 0.95 | 0.90 | 0.89 | 0.92 |
MIT License © 2025 Debangan Ghosh
Star ⭐ the repo if you liked the project. Contributions, feedback and forks are always welcome!
Connect with me on LinkedIn or drop an issue if you want to collaborate!