Project Description
This project implemented end-to-end behavioral cloning for autonomous driving, learning a direct mapping from camera images to steering commands. The model was trained on data collected over multiple laps of a simulated track, including off-track recovery maneuvers demonstrated by a human driver.
The approach demonstrates how imitation learning can be applied to complex control tasks, using deep neural networks to learn driving policies directly from visual input without explicit programming of driving rules.
Technical Approach
- End-to-end learning from images to steering commands
- Convolutional Neural Network (CNN) architecture (see the sketch after this list)
- Data augmentation techniques for robustness
- Recovery behavior training from off-track scenarios
- Real-time inference in the driving simulator
- Data collection and preprocessing pipeline
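The project description does not spell out the exact network, so the following is a minimal sketch of a plausible Keras model in the PilotNet style often used for this task. The input shape, layer sizes, and the in-model normalization and cropping steps are assumptions for illustration, not the project's confirmed architecture.

```python
# Sketch of a PilotNet-style CNN for steering prediction (Keras).
# The 160x320x3 input shape and layer sizes are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Lambda, Cropping2D, Conv2D, Flatten,
                                     Dense, Dropout)

def build_model(input_shape=(160, 320, 3)):
    model = Sequential([
        # Normalize pixel values to roughly [-0.5, 0.5]
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape),
        # Crop away sky and hood pixels that carry little steering signal
        Cropping2D(cropping=((60, 25), (0, 0))),
        Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(64, (3, 3), activation="relu"),
        Conv2D(64, (3, 3), activation="relu"),
        Flatten(),
        Dropout(0.5),
        Dense(100, activation="relu"),
        Dense(50, activation="relu"),
        Dense(10, activation="relu"),
        Dense(1),  # single output: predicted steering angle
    ])
    # Steering prediction is a regression, so mean squared error is a natural loss
    model.compile(optimizer="adam", loss="mse")
    return model
```

Training such a model end-to-end means the convolutional layers learn their own road features (lane edges, curvature cues) directly from the loss on predicted steering, with no hand-written driving rules.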
Dataset and Training
The training dataset consisted of:
- Multiple laps of normal driving behavior
- Recovery maneuvers from various track positions
- Data augmentation through image transformations (see the sketch after this list)
- A balanced dataset to prevent bias toward straight driving
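The specific transformations and balancing scheme are not listed in the description; the sketch below shows common choices for this kind of dataset, assumed for illustration: horizontal flips with negated steering angles, random brightness changes, and downsampling of near-zero-angle frames so straight driving does not dominate.

```python
# Sketch of augmentation and balancing steps (assumed, not the project's
# confirmed pipeline).
import numpy as np
import cv2

def augment_sample(image, angle):
    """Randomly flip the frame left-right (negating the steering angle)
    and apply a small random brightness change for lighting robustness."""
    if np.random.rand() < 0.5:
        image = cv2.flip(image, 1)   # horizontal flip
        angle = -angle               # steering mirrors with the image
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * np.random.uniform(0.7, 1.3), 0, 255)
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
    return image, angle

def balance_indices(angles, keep_prob_straight=0.3, threshold=0.05):
    """Keep only a fraction of near-zero-angle samples so the dataset is
    not biased toward straight driving. Returns indices to keep."""
    angles = np.asarray(angles)
    straight = np.abs(angles) < threshold
    keep = (~straight) | (np.random.rand(len(angles)) < keep_prob_straight)
    return np.where(keep)[0]
```

Flipping is particularly useful on a loop track, where most curves bend the same way; it doubles the effective data while removing that directional bias.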
Results
The trained model successfully navigated the entire track autonomously, demonstrating smooth steering control and appropriate responses to track curvature. The behavioral cloning approach proved effective for learning complex visuomotor control policies, though the results highlighted the importance of diverse training data and recovery demonstrations for robust performance.