Recent advances in machine learning (ML) based on deep neural networks (DNNs), collectively known as deep learning, have greatly improved the state of the art on many ML tasks, such as image classification (He, Zhang, Ren, & Sun, 2016; Krizhevsky, Sutskever, & Hinton, 2012; LeCun, Bottou, Bengio, & Haffner, 1998; Szegedy et al., 2015; Zeiler & Fergus, 2014), speech recognition (Graves, Mohamed, & Hinton, 2013; Hannun et al., 2014; Hinton et al., 2012), and complex games and learning from simple reward signals (Goodfellow et al., 2014; Mnih et al., 2015; Silver et al., 2016), among many other areas. Neural-network and other ML methods have been applied to autonomously controlling a vehicle from a single camera image, successfully navigating it on the road (Bojarski et al., 2016). However, recent advances in deep learning have not yet been applied systematically to this task. In this work, I used a simulated environment to implement and compare several methods for controlling autonomous navigation behavior using a single standard camera as the sole sensor of the environmental state. The simulator contained a car with a roof-mounted camera that gathered visual data while a human operator drove it through a virtual driving environment. The gathered data were then used for supervised training of an autonomous controller that drives the same vehicle remotely over a local connection. I reproduced the results of previous researchers who used simple neural networks and other ML techniques to guide similar test vehicles with a single camera, and compared these results against more complex deep neural network controllers to determine whether they improve on past methods in terms of speed, distance traveled, and other performance metrics on unseen simulated road-driving tasks.
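The supervised-training setup described above, learning a mapping from camera frames to driving commands from human demonstrations (often called behavioral cloning), can be sketched minimally as follows. This is an illustrative sketch only: the synthetic data, linear model, and all names and shapes here are hypothetical assumptions, not the networks or simulator data used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n=256, pixels=64):
    """Synthetic stand-in for (flattened camera frame, human steering) pairs."""
    frames = rng.normal(size=(n, pixels))
    true_w = rng.normal(size=pixels)          # unknown "expert" policy
    steering = frames @ true_w + 0.01 * rng.normal(size=n)
    return frames, steering

def train(frames, steering, lr=0.1, epochs=300):
    """Supervised regression: minimize mean squared steering error
    by gradient descent on a linear controller's weights."""
    w = np.zeros(frames.shape[1])
    for _ in range(epochs):
        pred = frames @ w                      # controller's steering output
        grad = 2.0 * frames.T @ (pred - steering) / len(steering)
        w -= lr * grad
    return w

frames, steering = make_dataset()
w = train(frames, steering)
mse = float(np.mean((frames @ w - steering) ** 2))
```

A deep-network controller replaces the linear map with a convolutional network over raw image pixels, but the training loop (predict a command, compare to the human's command, descend the error gradient) has the same shape.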