AI Training Made Robots Racist and Sexist

By: Vivona Xu
A recent experiment has revealed an astonishing result: robots can become racist and sexist if trained with certain artificial intelligence software.
The study found that robots asked to put the “criminal” into a box repeatedly selected Black men’s faces.
Researchers from Johns Hopkins University, Georgia Institute of Technology, and other institutions trained robots using the CLIP AI model.
They asked the robots to scan blocks showing people’s faces and classify the individuals into 62 boxes based on physical features.
The researchers then asked the robots to identify the criminal from sets of faces. Not only did the robots keep selecting Black men’s faces as “criminals,” but they also identified women as “homemakers” and Latino men as “janitors” over white men when the scientists asked similar questions. Women were also less likely than men to be identified as doctors.
Scientists later pointed out that the robots should not have responded to the request to identify people’s professions at all, because they were given no information that would allow them to classify anyone. Essentially, the task was a trick question, and the robots failed it.
The CLIP AI program the researchers used to train the robots beforehand is a large artificial intelligence model that matches images with text, created and released to the public by OpenAI last year.
The software grew in popularity because it can visually classify objects, having learned from billions of images and captions scraped from the internet.
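
For readers curious how this kind of labeling works in practice, below is a minimal sketch of CLIP-style zero-shot image classification in Python. It uses the open-source Hugging Face transformers library rather than the researchers’ actual robot setup, and the model checkpoint, image file, and label list are illustrative assumptions, not details from the study.

    # Minimal sketch of CLIP-style zero-shot labeling (illustrative only;
    # NOT the researchers' experimental setup).
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # "openai/clip-vit-base-patch32" is a publicly released CLIP checkpoint.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("face.jpg")  # hypothetical input photo
    labels = ["a photo of a doctor", "a photo of a homemaker", "a photo of a janitor"]

    # CLIP scores each caption against the image; softmax turns the scores
    # into probabilities over the candidate labels.
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)

    for label, p in zip(labels, probs[0].tolist()):
        print(f"{label}: {p:.2f}")

Because the candidate captions can name social roles, any stereotype the model absorbed from its scraped training data can surface directly in these match scores, which is exactly the failure mode the study describes.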
Researchers realized the program was flawed: it teaches robots to embrace unfair stereotypes, which could create a hostile environment in the workplaces where they may one day operate.
“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”
The CLIP AI was not the first software found to carry biased lessons. Researchers have identified numerous AI algorithms that discriminate by race and gender.
For instance, some facial recognition systems have trouble identifying people of color, and crime prediction algorithms tend to unfairly target Black and Latino people.
These programs pose a serious risk of creating sexist and racist robots that could spread those biases to the people who interact with them.
Sources:
Robots became racist after AI training, always chose Black faces as ‘criminals’ (yahoo.com)
These robots were trained on AI. They became racist and sexist. (msn.com)