
How Autonomous Vehicles Work & How That Could Mean the End of Accidents Forever

We believe smart driving automation & intelligent safety assistance may hold the potential to end road accidents – perhaps forever!

According to the World Health Organization, road accidents are responsible for the deaths of approximately 1.35 million people each year. Moreover, about 20-50 million people suffer non-fatal injuries in road accidents.

As per a recent report, around 71 per cent of the 4.49 lakh road accidents in India last year were due to over-speeding.

Considering the significant losses caused by road accidents, governments & automotive companies across the globe have been working towards reducing road fatalities. But much of that effort has been regulatory in nature or focused on reducing the damage an accident causes.

Now, we have the opportunity to bring preventive measures into play.

Technology is at the forefront of these efforts. The development & adoption of autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) is a step in the right direction. Many major automotive manufacturers are already conducting extensive on-road testing of AVs or self-driving cars, with big players like Tesla, Uber, and Waymo extensively testing autonomous vehicles. There are already thousands of AVs on the road, and this number will only increase over time. As per Boston Consulting Group, there will be about 12 million AVs by 2035.

But what technologies are utilized in AVs & ADAS? What are the challenges & how can AVs & ADAS end road accidents?

Let’s look at all these aspects.

Understanding AVs – how do AVs work?

AVs make use of several systems to replace a human driver. AVs rely on a set of sensors to view the environment, software to process the data & actuators to make automated decisions & act. For gathering data about the environment, AVs use object detection sensors to perceive a vehicle’s direct surroundings. These sensors can be passive, capturing signals already present in the environment, or active, emitting their own signals & then sensing the reflections. SONAR, RADAR & LIDAR are examples of active sensors.
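The idea behind active sensors can be sketched in a few lines. This is an illustrative example, not a real AV API: RADAR/LIDAR (electromagnetic) and SONAR (acoustic) all estimate range from the round-trip time of a signal they emit, differing mainly in the propagation speed involved.

```python
# Illustrative sketch: active sensors estimate range from round-trip echo time.
SPEED_OF_LIGHT_M_S = 299_792_458.0   # RADAR / LIDAR signals
SPEED_OF_SOUND_M_S = 343.0           # SONAR (acoustic, in air at ~20 °C)

def range_from_echo(round_trip_s: float, propagation_speed_m_s: float) -> float:
    """Distance to the reflecting object: the signal travels out and back,
    so the one-way range is half the total distance covered."""
    return propagation_speed_m_s * round_trip_s / 2.0
```

For example, a LIDAR echo returning after one microsecond corresponds to an object roughly 150 m away, which is why such sensors can perceive hazards well beyond a driver's reaction distance.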

Learning the environment

Using the data captured through various sensors, an AV constructs a map of its environment while keeping track of its own location in it. An AV needs to continuously sense its environment & build an environment map for itself. This process is known as Simultaneous Localization and Mapping (SLAM). Once a SLAM algorithm lets an AV know its location on its map, the vehicle can start planning which path to take.

SLAM is a complex process & is fundamental to AVs. To perform SLAM with greater accuracy, sensor fusion, i.e., the process of combining data from various sensors & databases, is key.
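A minimal sketch of one sensor-fusion step, assuming two independent distance estimates (say, from RADAR and LIDAR) with known noise variances. This inverse-variance weighting is the core of a Kalman-filter update; real SLAM pipelines are far more elaborate, but the principle is the same: the fused estimate is more certain than either sensor alone.

```python
# Hypothetical sketch of inverse-variance sensor fusion (one Kalman-style step).
def fuse_estimates(z1: float, var1: float, z2: float, var2: float):
    """Combine two noisy measurements of the same quantity.
    Each measurement is weighted by 1/variance; the fused variance
    is always smaller than either input variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var
```

For instance, if RADAR reports 10.0 m (variance 0.5) and LIDAR reports 10.4 m (variance 0.1), the fused estimate sits much closer to the more reliable LIDAR reading.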

Machine learning algorithms

AVs use different machine learning algorithms for multiple applications. Common machine learning methodologies used in AVs are CNN (convolutional neural network), RNN (recurrent neural network) & DRL (deep reinforcement learning). CNNs are used to process images & spatial information to identify objects in the environment. RNNs are particularly useful when working with temporal information like videos. DRL is a goal-oriented algorithm that can learn how to attain an objective.

These algorithms help map a set of inputs to a set of outputs, based on the training data provided. This makes the role of training data extremely crucial to AV operation.
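The operation at the heart of a CNN can be shown with a toy pure-Python example: sliding a small kernel over an image to produce a feature map. Production perception stacks use GPU frameworks, so this only illustrates the idea, not how it is actually implemented.

```python
# Illustrative sketch: the 2-D convolution (cross-correlation) step a CNN
# layer applies to extract spatial features from an image.
def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = sum(image[i + u][j + v] * kernel[u][v]
                      for u in range(kh) for v in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes left-to-right,
# e.g. the boundary of an object against the road surface.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]
```

In a trained CNN, kernels like this are not hand-written; their values are learned from annotated data, which is exactly why training data matters so much.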

Simulation & training of Autonomous Vehicles

As mentioned, the machine learning algorithms need to be comprehensively trained on datasets representing the full range of realistic scenarios. In the ADAS development process, a huge volume of data is acquired from the sensors across a fleet of vehicles. This data is then used for training ADAS models. To get AVs on public roads quickly, we need a huge amount of training data & this data must be annotated by experts.

Data annotation is the process of classifying each object in the frame captured by an AV. To create machine learning models that will stand the test of the real world, this data needs to be clearly, accurately & comprehensively annotated. This is the foundation of the machine learning algorithm’s ability to decipher inputs and take appropriate actions out in the real world.
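A sketch of what one annotated frame might look like, plus a simple quality check. The schema below (field names, label set) is hypothetical; real projects use established formats such as COCO or KITTI, but it shows what “classifying each object in the frame” produces.

```python
# Hypothetical annotation schema for one camera frame.
KNOWN_LABELS = {"car", "pedestrian", "cyclist", "traffic_sign"}

frame_annotation = {
    "frame_id": "frame_000417",
    "objects": [
        {"label": "car",        "bbox": [120, 80, 260, 190]},  # [x1, y1, x2, y2]
        {"label": "pedestrian", "bbox": [300, 95, 340, 210]},
    ],
}

def validate_annotation(frame: dict) -> list:
    """Return a list of problems; an empty list means the frame passed QA."""
    errors = []
    for obj in frame["objects"]:
        if obj["label"] not in KNOWN_LABELS:
            errors.append(f"unknown label: {obj['label']}")
        x1, y1, x2, y2 = obj["bbox"]
        if x2 <= x1 or y2 <= y1:
            errors.append(f"degenerate bbox for {obj['label']}")
    return errors
```

Checks like this are why expert annotation matters: a mislabeled or degenerate box silently teaches the model the wrong thing.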

Automated decisions

Based on all the information gathered by its sensors & processed by its algorithms, an AV becomes capable of making autonomous decisions. In regular human-driven cars, the vehicle’s actions like steering, braking, or signaling are controlled by a driver. The mechanical signals from the driver are translated by an electronic control unit (ECU) into actuation commands.

These commands are executed by electric or hydraulic actuators on a vehicle. In autonomous vehicles, this functionality is replaced by drive control software which directly communicates with an ECU. These self-driven vehicles contain multiple ECUs. These ECUs are simple computing units with their own memory & microcontroller.
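The control flow above can be sketched as follows. The function, command format & component names are invented for illustration; real drive control software speaks to ECUs over automotive buses like CAN, with far richer message formats.

```python
# Hypothetical sketch: drive control software turning a planned maneuver
# into per-actuator commands that an ECU would forward to the hardware.
def plan_to_commands(maneuver: str, intensity: float) -> list:
    """Translate a high-level maneuver into actuation commands.
    `intensity` is clamped to the normalised range [0, 1]."""
    intensity = max(0.0, min(1.0, intensity))
    if maneuver == "brake":
        return [{"ecu": "brake_ecu", "actuator": "hydraulic_brake",
                 "value": intensity}]
    if maneuver == "steer_left":
        return [{"ecu": "steering_ecu", "actuator": "steering_motor",
                 "value": -intensity}]
    if maneuver == "steer_right":
        return [{"ecu": "steering_ecu", "actuator": "steering_motor",
                 "value": intensity}]
    raise ValueError(f"unknown maneuver: {maneuver}")
```

Note how each command targets a specific ECU: this mirrors the point that a self-driving vehicle contains multiple ECUs, each a simple computing unit responsible for its own actuators.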

So, how about that question about preventing accidents?

As per the US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA), about 94% of severe traffic accidents are because of human error. There are all manner of human factors & fragilities to consider:

1. Errors are caused by distracted drivers who may not notice relevant stimuli in the environment (like traffic, road conditions, or unwary pedestrians) that could lead to accidents. With AVs & ADAS, this problem is addressed comprehensively. Appropriately trained ML algorithms will not fail to account for such environmental factors while making their decisions. Fewer errors mean fewer accidents.

2. Some accidents are caused by environmental factors that are beyond the capacity of human drivers to prevent. Think about pileups on the highway in foggy conditions, for instance. Now factor in the ability of a vehicle equipped with active sensors to become aware of such conditions & take preventive action in time.

3. Tired or sleepy drivers who lose control of their vehicles are also a key factor in accidents, especially on highways. Of course, the intelligent systems in AVs don’t get tired & always function at their rated design efficiency, ruling out such accidents.

4. With human-only drivers, there exists the possibility that they may drive while unaware of some potentially serious problem in their vehicle. For instance, there could be a problem with the braking system, or a worn-out tire liable to a blowout. These problems could cause the driver to lose control of the vehicle & lead to an accident. With smarter automated systems in control of the car, such health issues will be noticed & flagged for attention before problems occur.

5. Of course, many accidents are caused by drivers doing exactly what they want to do, even if doing so is unwise. Speeding, driving under the influence or while incapacitated, or driving unsafely are all caused by irresponsible human behavior. Many smart vehicles will be “trained” to look for signs of such behavior & apply remedial measures.
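The preventive checks described in the list above can be caricatured as a rule-based monitor. The thresholds & signal names here are invented (real systems use learned models and many more signals), but the sketch shows how a vehicle can flag risky states before they turn into accidents.

```python
# Toy sketch of a rule-based safety monitor; thresholds are illustrative.
def safety_flags(speed_kmh: float, speed_limit_kmh: float,
                 tire_tread_mm: float, driver_attentive: bool) -> list:
    """Return a list of risk flags for the current vehicle state."""
    flags = []
    if speed_kmh > speed_limit_kmh:
        flags.append("over_speeding")       # factor 5: irresponsible behavior
    if tire_tread_mm < 1.6:                 # a common legal minimum tread depth
        flags.append("worn_tire")           # factor 4: vehicle health
    if not driver_attentive:
        flags.append("driver_inattentive")  # factors 1 & 3: distraction, fatigue
    return flags
```

Each raised flag maps to one of the human-error causes above, which is exactly the logic behind the claim that automation can prevent, not just mitigate, accidents.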

Based on our understanding of the technology used & the effort that goes into ensuring the safety of autonomous vehicles, it is easy to take the stand that AVs & ADAS have the potential to end road accidents. It seems pretty clear that if the AV or ADAS machine learning algorithms & intelligent systems are trained well, more or less all the major causes of accidents can be eliminated. That done, there’s really no reason why any accidents should occur, right?