Self-driving car systems have trouble detecting darker skin


We are slowly entering the era of self-driving cars. Companies like Tesla, Google, and Uber are hitting milestones that bring us closer to the day when we'll sit back and relax while smart systems drive our cars. However, risk is still involved, and recent accidents involving self-driving cars indicate that we are still far from that day.

While navigating obstructions and making split-second decisions remain the major challenges, researchers have found another issue with self-driving cars — poor detection of pedestrians with dark skin.


A study conducted at the Georgia Institute of Technology concluded that self-driving systems are five percent less accurate at detecting dark-skinned pedestrians.

The researchers began by analyzing how accurately state-of-the-art object-detection models detect people from different demographic groups.

The dataset contained images of pedestrians, which were then segregated by skin tone. This was done using the Fitzpatrick scale, which classifies human skin tones on a range from light to dark.
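The comparison the study describes — splitting detections by skin-tone group and measuring accuracy per group — can be sketched in a few lines. The sketch below is purely illustrative: the group labels, the data, and the `recall_by_group` helper are hypothetical and do not come from the study, whose actual datasets and models are not public.

```python
# Hypothetical sketch: comparing pedestrian-detection recall across
# Fitzpatrick skin-tone groups. Group names and numbers are illustrative,
# not the study's actual data.

from collections import defaultdict

def recall_by_group(annotations):
    """annotations: list of (skin_tone_group, was_detected) pairs.

    Returns the fraction of pedestrians detected in each group.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, detected in annotations:
        totals[group] += 1
        if detected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy sample loosely mirroring the reported five-percent gap:
sample = ([("light", True)] * 90 + [("light", False)] * 10
          + [("dark", True)] * 85 + [("dark", False)] * 15)
print(recall_by_group(sample))  # {'light': 0.9, 'dark': 0.85}
```

A per-group recall table like this is the kind of output the researchers would then re-check under different conditions (time of day, occlusion) to rule out confounding variables.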


The systems showed biased decision making even when variables like the time of day in the images and partial occlusion of pedestrians were accounted for in the study.

One of the reasons behind the 'racist' self-driving system is algorithmic bias, which arises when biased human behavior seeps into computer systems. A system imitates the behavior of its designers, and that shapes its decision-making capabilities.

The study does reveal an algorithmic bias against dark-skinned people, but it is important to note that the research hasn't been peer-reviewed and doesn't use the actual object-detection models deployed by most autonomous-vehicle manufacturers. The study is based on publicly available datasets used by academic researchers; companies don't make their actual data available, which is an issue in itself.


Nonetheless, the risks uncovered by the study are real, and it is high time companies took concrete steps to weed out biased behavior in computer systems and make them safer for everyone.
