Practical companion to the IMT Machine Learning course.
This course teaches how to design, run, and audit machine learning experiments in a research-grade setting. We move from a minimal training loop to a structured, reproducible experiment pipeline.
The focus is on:
- reproducibility,
- controlled comparisons,
- structured logging,
- and experimental rigor.
🌍 Big Picture
Why ML experiments are hard today: scale, brittleness, infrastructure

💻 Dev Setup in 2026
A minimal researcher stack: IDE + AI Assist, Git, environments, tracking

🔁 Training Script (Vanilla)
Build the minimal loop: data → model → loss → optimizer → eval

📊 Training Script (Research-Grade)
Make runs comparable: configs, logging, checkpoints, run grids, basic HPO

🧾 (Optional) Working with Text
Run a tiny Transformer experiment (tokenization, batching, evaluation)

⚡ (Optional) Hardware for ML
Scope experiments: VRAM/RAM/disk, throughput bottlenecks, GPU selection
| # | Section | Notebook |
|---|---|---|
| 1 | Data | 1_data.ipynb |
| 2 | Model | 2_model.ipynb |
| 3 | Optimizer + Loss | 3_optimizer_and_loss.ipynb |
| 4 | Training Loop | 4_training_loop.ipynb |
| 5 | Training Script | 5_training_script.ipynb |
| 6 | Transformers | 6_transformers.ipynb |
- `train_mnist.py`: single reproducible run
- `runner_simple.py`: programmatic run launcher
- `runner_full.py`: small run grid scheduler
- `hp_opt.py`: minimal random hyperparameter search
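A run grid like the one `runner_full.py` schedules can be sketched as the Cartesian product of seeds and hyperparameters; the config keys and grid values below are assumptions for illustration, not the script's actual schema:

```python
import itertools

# Hypothetical grid: 3 seeds x 2 learning rates
seeds = [0, 1, 2]
lrs = [1e-3, 1e-2]

# Every (seed, lr) pair becomes one run config
runs = [{"seed": s, "lr": lr} for s, lr in itertools.product(seeds, lrs)]
for cfg in runs:
    # a real scheduler would launch the training script here,
    # e.g. via subprocess with the script's own CLI flags
    print(cfg)
```

Crossing seeds with hyperparameters is what makes comparisons controlled: each hyperparameter setting is evaluated under the same set of seeds.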
These scripts illustrate the progression:
One run
↓
Configurable script
↓
Run grid (seed × hyperparameters)
↓
Structured hyperparameter search
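The final step, structured hyperparameter search, can be sketched as random sampling: draw configs from a search space and keep the best by validation loss. The objective below is a synthetic stand-in for an actual training run, and the search space is illustrative rather than what `hp_opt.py` uses:

```python
import random

def val_loss(lr, batch_size):
    """Stand-in objective; a real search would train and return validation loss."""
    return (lr - 0.01) ** 2 + 0.001 * abs(batch_size - 64)

random.seed(0)
best_score, best_cfg = float("inf"), None
for trial in range(20):
    cfg = {
        "lr": 10 ** random.uniform(-4, -1),        # log-uniform learning rate
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = val_loss(**cfg)
    if score < best_score:
        best_score, best_cfg = score, cfg
```

Sampling the learning rate log-uniformly is the usual choice, since plausible values span several orders of magnitude.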
All materials are self-contained and runnable locally (CPU or single GPU).
📍 Location
IMT School for Advanced Studies Lucca
San Francesco Complex
Classroom 2
🗓 Timetable
| Day | Date | Time |
|---|---|---|
| Friday | February 13, 2026 | 09:00–11:00 |
| Monday | February 16, 2026 | 09:00–11:00 |