Driverless car systems have a bias problem, according to a new study from King's College London. The study examined eight AI-powered pedestrian detection systems used for autonomous driving research. Researchers ran more than 8,000 images through the software and found that the systems were nearly 20% better at detecting adult pedestrians than children, and more than 7.5% better at detecting light-skinned pedestrians than dark-skinned ones. The AI was even worse at spotting dark-skinned people in low-light, low-contrast settings, making the technology even less safe at night.
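To make those gaps concrete, the kind of disparity the researchers report can be expressed as a simple difference in per-group detection rates. Here is a minimal, hypothetical sketch of that calculation; the function, field names, and numbers are illustrative and are not drawn from the study itself.

```python
from collections import defaultdict

def detection_rates(results):
    """Compute per-group detection rates from labeled results.

    `results` is a hypothetical list of dicts, one per ground-truth
    pedestrian, e.g. {"group": "adult", "detected": True}.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        hits[r["group"]] += 1 if r["detected"] else 0
    return {group: hits[group] / totals[group] for group in totals}

# Illustrative numbers only -- not the study's data.
results = (
    [{"group": "adult", "detected": True}] * 85
    + [{"group": "adult", "detected": False}] * 15
    + [{"group": "child", "detected": True}] * 65
    + [{"group": "child", "detected": False}] * 35
)

rates = detection_rates(results)
# The "bias gap" is the spread between the best- and
# worst-served groups' detection rates.
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.1%}")
```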

For children and people of color, crossing the street could get more dangerous in the near future.

“Fairness when it comes to AI is when an AI system treats privileged and under-privileged groups the same, which is not what is happening when it comes to autonomous vehicles,” said Dr. Jie Zhang, one of the study authors, in a press release. “Car manufacturers don’t release the details of the software they use for pedestrian detection, but as they are usually built upon the same open-source systems we used in our research, we can be quite sure that they are running into the same issues of bias.”

The study didn’t test the exact same software used by driverless car companies that already have their products on the streets, but it adds to growing safety concerns as the cars become more common. This month, the California state government gave Waymo and Cruise free rein to operate driverless taxis in San Francisco 24 hours a day. Already, the technology is causing accidents and sparking protests in the city.

Cruise, Waymo, and Tesla, three of the companies best known for self-driving cars, did not immediately respond to requests for comment.

According to the researchers, a major source of the technology’s problems with children and dark-skinned people is bias in the data used to train the AI, which contains far more adults and light-skinned people.
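One common way to surface that kind of imbalance, before any training happens, is to audit the demographic composition of a dataset’s annotations. Below is a hedged sketch of such an audit; the annotation records and the `age_group` and `skin_tone` fields are assumptions for illustration, as real pedestrian datasets vary in whether they carry demographic labels at all.

```python
from collections import Counter

# Hypothetical annotation records for a pedestrian dataset.
annotations = [
    {"age_group": "adult", "skin_tone": "light"},
    {"age_group": "adult", "skin_tone": "light"},
    {"age_group": "adult", "skin_tone": "dark"},
    {"age_group": "child", "skin_tone": "light"},
]

# Report each group's share of the training data; a heavy skew toward
# adults and light-skinned pedestrians is the imbalance the study flags.
for field in ("age_group", "skin_tone"):
    counts = Counter(a[field] for a in annotations)
    total = sum(counts.values())
    shares = {group: f"{n / total:.0%}" for group, n in counts.items()}
    print(field, shares)
```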

Algorithms reflect the biases present in their training datasets and in the minds of the people who create them. One common example is facial recognition software, which consistently demonstrates lower accuracy on the faces of women, dark-skinned people, and Asian people in particular. These concerns haven’t stopped the enthusiastic embrace of this kind of AI technology: facial recognition is already responsible for putting innocent Black people in jail.
