This project focuses on optimizing race strategies (pit stops, tire choices, fuel management, etc.) using Reinforcement Learning (RL).
To train RL agents effectively, the system first generates synthetic race simulations based on telemetry data collected from the Le Mans Ultimate racing simulator, allowing for controlled and repeatable experiments.
Modern motorsport strategy relies heavily on data-driven decision making.
The goals of this project are to:
- Process telemetry data from the Le Mans Ultimate simulator (lap times, tire wear, fuel consumption, weather, collision impacts, etc.).
- Train neural network models to reproduce realistic race dynamics and generate synthetic race sequences.
- Use these simulations as an environment for RL algorithms, enabling agents to learn optimal race strategies without the cost of real-world or in-game testing.
The pipeline breaks down into the following steps (illustrative sketches of the core components follow this list):
- Data ingestion from Le Mans Ultimate telemetry logs.
- Preprocessing and scaling of continuous race data.
- A long short-term memory (LSTM) model for predicting next race states and generating synthetic race runs.
- Extending the simulator with probabilistic events (e.g., car damage from collisions).
- Integrating a Reinforcement Learning environment so agents can optimize strategies (pit stops, tire and fuel management).
- Testing and fine-tuning the RL model directly against Le Mans Ultimate.
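
A rough sketch of the ingestion and scaling steps, assuming a hypothetical SQLite table named `laps` with columns `lap_time`, `tire_wear`, and `fuel_level` (the real schema depends on how the LMU telemetry is exported):

```python
import sqlite3
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical schema; actual table/column names depend on the telemetry export.
QUERY = "SELECT lap_time, tire_wear, fuel_level FROM laps ORDER BY lap_number"

def load_telemetry(db_path: str) -> np.ndarray:
    """Load per-lap telemetry from a SQLite log into a (n_laps, n_features) array."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(QUERY).fetchall()
    return np.asarray(rows, dtype=np.float32)

def scale_features(data: np.ndarray) -> tuple[np.ndarray, MinMaxScaler]:
    """Scale continuous race features to [0, 1]; keep the scaler to invert predictions."""
    scaler = MinMaxScaler()
    return scaler.fit_transform(data), scaler
```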
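A minimal sketch of the sequence model in PyTorch, assuming the scaled telemetry is windowed into (batch, laps, features) tensors; the layer sizes are illustrative, not the project's tuned hyperparameters:

```python
import torch
import torch.nn as nn

class RaceStateLSTM(nn.Module):
    """Predicts the next race state from a window of previous states."""

    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) -> state following the window
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

@torch.no_grad()
def rollout(model: RaceStateLSTM, seed: torch.Tensor, n_laps: int) -> torch.Tensor:
    """Autoregressively generate a synthetic race run from a seed window."""
    window = seed.clone()  # (1, seq_len, n_features)
    states = []
    for _ in range(n_laps):
        nxt = model(window)  # (1, n_features)
        states.append(nxt)
        # Slide the window forward by one predicted lap.
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
    return torch.cat(states, dim=0)
```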
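Probabilistic events can be layered on top of the generated runs. A sketch with an assumed per-lap collision probability and a flat damage penalty on lap time; both numbers are placeholders, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_collisions(lap_times: np.ndarray,
                      p_collision: float = 0.02,
                      penalty_s: float = 4.0) -> np.ndarray:
    """Randomly mark laps as collision-affected and add a lap-time penalty (seconds).

    p_collision and penalty_s are illustrative placeholders.
    """
    hits = rng.random(lap_times.shape) < p_collision
    return lap_times + hits * penalty_s
```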
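One way to expose the learned simulator to RL agents is a Gymnasium-style environment (Gymnasium is not listed in the stack, so this is an assumption). The observation, action, and reward definitions below are illustrative: the agent sees normalized lap progress, tire wear, and fuel level, chooses stay-out vs. pit each lap, and is rewarded with negative lap time so that minimizing total race time is the objective. The hard-coded dynamics stand in for the LSTM model:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class RaceStrategyEnv(gym.Env):
    """Wraps the race simulator as an RL environment (illustrative sketch)."""

    def __init__(self, total_laps: int = 50):
        super().__init__()
        self.total_laps = total_laps
        # obs = (lap progress, tire wear, fuel level), all normalized to [0, 1]
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)  # 0 = stay out, 1 = pit

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.lap, self.tire_wear, self.fuel = 0, 0.0, 1.0
        return self._obs(), {}

    def step(self, action):
        pit_cost = 25.0 if action == 1 else 0.0  # placeholder pit-lane time loss
        if action == 1:
            self.tire_wear, self.fuel = 0.0, 1.0  # fresh tires, full tank
        # Placeholder dynamics; in the project these would come from the LSTM model.
        lap_time = 90.0 + 10.0 * self.tire_wear + pit_cost
        self.tire_wear = min(1.0, self.tire_wear + 0.04)
        self.fuel = max(0.0, self.fuel - 0.03)
        self.lap += 1
        terminated = self.lap >= self.total_laps or self.fuel <= 0.0
        return self._obs(), -lap_time, terminated, False, {}

    def _obs(self):
        return np.array([self.lap / self.total_laps, self.tire_wear, self.fuel],
                        dtype=np.float32)
```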
The project is built with:
- Python, PyTorch, scikit-learn – neural network training and inference
- NumPy, SQLite – data handling and preprocessing
- Matplotlib – visualization
This simulator provides a safe, data-driven playground for experimenting with race strategy optimization.
By combining machine learning with reinforcement learning, it creates an adaptable environment for testing strategies that would be costly or time-consuming to explore even in a racing simulator.