Late last year, when my trusty Altima of over 15 years started needing coolant refills more often than gas refills, I knew it was finally time to start looking for a new car. While many mid-range automobiles now come standard with impressive technological features, I have always been especially intrigued by electric vehicles and the technology behind self-driving cars. I took note of Tesla’s Model S when it was introduced in 2012, and while the raw power and feature set were exciting, the price tag was well out of my range. With the introduction of the Model 3 in 2017, however, the prospect of having this power and technology at my fingertips became much more real. After taking a Model 3 for a test drive and getting the chance to explore some of the features for myself, I was sold. I placed an order, received my new Model 3 about a month later, and I haven’t looked back.
Driving is something I look forward to now, not just because of the incredible acceleration of my Model 3 (0-60 in 4.4 seconds), but also because of the truly impressive feature set that comes with it. The Autopilot package allows the vehicle to manage highway driving from on-ramp to off-ramp, including adaptive cruise control and automatic handling of lane changes and interchanges. This post highlights the AI operating behind the scenes to make these advanced capabilities possible.
Self-driving cars are classified by the Society of Automotive Engineers (SAE) and the National Highway Traffic Safety Administration (NHTSA) into six levels, from Level 0 (no automation) through Level 5. Level 1 vehicles include only basic driver-assistance features such as cruise control; Level 2 vehicles implement partial automation but require driver engagement and monitoring (Tesla’s vehicles fall into this category); Level 3 vehicles do not require the driver to monitor the environment; Level 4 vehicles can perform all driving functions under certain conditions; and Level 5 vehicles can perform all driving functions under all conditions.
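For fellow data folks, the taxonomy maps neatly onto a simple enumeration. Here is an illustrative Python sketch; the level numbers follow SAE J3016, but the names are just my own shorthand, not any official API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (names are my own shorthand)."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # basic assistance, e.g., cruise control
    PARTIAL_AUTOMATION = 2      # car steers and accelerates; driver must monitor
    CONDITIONAL_AUTOMATION = 3  # driver need not monitor, but must take over on request
    HIGH_AUTOMATION = 4         # all driving functions, under certain conditions
    FULL_AUTOMATION = 5         # all driving functions, under all conditions

print(SAELevel.PARTIAL_AUTOMATION)  # where Tesla's Autopilot sits today
```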
To decide its next move, a vehicle first uses a combination of sensors to build a continuous 360° view of its surroundings. The Tesla Model 3 builds this picture with 8 cameras and 12 ultrasonic sensors, and a computer vision system then interprets it to make sense of the scene. This is where Tesla’s convolutional neural network comes into play: it performs object detection, lane detection, and image classification for everything in the car’s immediate vicinity, including other vehicles, traffic signals, pedestrians, and even traffic cones. All of this happens in real time, and the outputs are passed to the vehicle’s control stage, which determines whether to accelerate or decelerate, and exactly how much to turn the wheels to follow the rules of the road while watching out for obstacles.
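To make that pipeline concrete, here is a deliberately toy sketch of a perception-to-control loop in Python. To be clear, this is nothing like Tesla’s production stack: the off-the-shelf detector, the function names, and the threshold are all stand-ins of my own choosing.

```python
import torch
import torchvision

# Perception stage: an off-the-shelf detector stands in for Tesla's
# proprietary network (an assumption for illustration, not their model).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def perceive(camera_frames):
    """Run object detection on each camera frame (CxHxW floats in [0, 1])."""
    with torch.no_grad():
        return detector(camera_frames)  # per-frame boxes, labels, scores

def control(detections, score_threshold=0.8):
    """Toy control stage: ease off if any confident detection appears."""
    for frame in detections:
        if (frame["scores"] > score_threshold).any():
            return "decelerate"
    return "maintain_speed"

# One tick of the loop: 8 camera frames -> perception -> control decision.
frames = [torch.rand(3, 360, 640) for _ in range(8)]
print(control(perceive(frames)))
```

The real system fuses all the sensor streams and reasons about lanes, paths, and physics, but even this caricature shows the shape of the loop: sense, interpret, decide.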
The real magic is that the neural networks are continually trained on data sent back by every car in the Tesla fleet, allowing the system to improve over time: object recognition gets better, and the vehicles gain a better understanding of how to react in various situations. These improvements are regularly pushed to all of Tesla’s vehicles via software updates that can be downloaded and installed wirelessly, much as updates are pushed to smartphones. And with the optional Full Self-Driving package comes the promise of increasingly autonomous driving on surface roads as improvements are made using the data collected by the fleet.
This continual training is emblematic of the iterative nature of Data Science. Retraining machine learning models and neural networks as new data arrives allows new features to be added as they become viable, and a larger, more diverse training dataset can significantly improve a model’s accuracy. In the case of self-driving automobiles, this iterative improvement process has, over time, produced a system that can skillfully direct a vehicle along its route, avoid potential hazards, and follow the rules of the road.
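In miniature, that iterative loop might look something like the sketch below, using scikit-learn and synthetic data as stand-ins for the real fleet; every number here is invented for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# A synthetic stand-in for fleet data: a growing pool of labeled examples.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(loss="log_loss", random_state=0)

# Each "release" trains on all the data collected so far, then ships the
# updated model (here we simply report its held-out accuracy).
for release, n in enumerate([1_000, 5_000, 15_000], start=1):
    model.partial_fit(X_train[:n], y_train[:n], classes=[0, 1])
    print(f"release {release}: accuracy = {model.score(X_test, y_test):.3f}")
```

Each release has more data to learn from, which is exactly why a large, always-driving fleet is such an advantage.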
In fact, just earlier this year one of these software updates added a feature that I absolutely love – when it rains, the Model 3 uses the front-facing camera mounted inside the car to recognize water on the windshield and activates the wipers accordingly. This feature can be supplemented by manually pressing a button on the turn signal stalk to activate the wipers if they are not keeping up. With this software update, these manual button press overrides are now being sent to Tesla and used to train a neural network to improve the recognition of water on the windshield and thus the efficacy of the automatic windshield wiper functionality. And now every time I coax my windshield wipers to keep up with the rain, I smile knowing that I am helping to train a neural network with the push of a button.
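Tesla hasn’t published the details of this pipeline, but as a data analyst I can’t resist sketching how such an override might become a training example. The schema and names below are entirely my own invention:

```python
from dataclasses import dataclass

@dataclass
class WiperOverrideEvent:
    """Hypothetical training example harvested from a manual wiper press."""
    camera_frame: bytes  # snapshot from the forward-facing camera
    auto_speed: int      # wiper speed the network had chosen (0 = off)
    manual_speed: int    # speed the driver requested with the stalk button

def to_training_example(event: WiperOverrideEvent) -> tuple[bytes, int]:
    """A manual press is an implicit correction: the driver's chosen speed
    becomes the ground-truth label for the captured frame."""
    return (event.camera_frame, event.manual_speed)

# Autopilot had the wipers off, but the driver pressed for speed 2, so this
# frame gets labeled "should have been wiping at speed 2".
event = WiperOverrideEvent(camera_frame=b"...", auto_speed=0, manual_speed=2)
print(to_training_example(event))
```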
Bob Wood is a Data Analyst at Pandata.