MORE | Fall 2020
Simulation Framework for Driving Data Collection and Object Detection Algorithms to Aid Autonomous Vehicle Emulation of Human Driving Styles
Identifying objects and estimating the volume they enclose is a critical measure of the viability of autonomous vehicles (AVs) and is crucial for building in safety. In addition to camera data, exploiting information from high-precision instruments such as LiDAR has proven significantly effective for 3D object detection and classification. This project develops a sensor-fusion algorithm that harnesses LiDAR point cloud information in conjunction with RGB images captured by cameras on an AV. The neural network was trained on a synthetic dataset prepared by the researcher and tested on a custom-made map ingested into the CARLA simulator. Among the networks available in Keras for transfer learning, the Xception architecture was found to be the most compatible.
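The fusion approach described above can be sketched as a two-branch Keras model: a frozen Xception backbone for the RGB branch, concatenated with a dense encoding of LiDAR-derived features. This is a minimal illustrative sketch, not the project's actual architecture; the LiDAR feature size, number of classes, and dense-head layout are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fusion_model(num_classes=5, lidar_dim=128):
    # RGB branch: Xception backbone for transfer learning.
    # weights=None here to keep the sketch self-contained; the project
    # would load pretrained (e.g. ImageNet) weights instead.
    base = tf.keras.applications.Xception(
        weights=None, include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # freeze backbone, train only the new head

    rgb_feat = layers.GlobalAveragePooling2D()(base.output)

    # Hypothetical LiDAR branch: a fixed-length feature vector derived
    # from the point cloud (e.g. a voxel or bird's-eye-view encoding).
    lidar_in = layers.Input(shape=(lidar_dim,), name="lidar_features")
    lidar_feat = layers.Dense(128, activation="relu")(lidar_in)

    # Late fusion: concatenate both branches, then classify.
    fused = layers.Concatenate()([rgb_feat, lidar_feat])
    out = layers.Dense(num_classes, activation="softmax")(fused)
    return models.Model(inputs=[base.input, lidar_in], outputs=out)
```

A model built this way takes a pair of inputs, an RGB image tensor and a LiDAR feature vector, and outputs class probabilities; only the fusion head is trained while the Xception weights stay frozen.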
Hometown: Tempe, Arizona, United States
Graduation date: Fall 2020