
AI May Have Racist and Sexist Biases



By: Bowie Zeng


Last month, a study led by researchers at institutions including Johns Hopkins University and the Georgia Institute of Technology showed that artificial intelligence systems used in robots may have racist and sexist biases.


In April, Amazon put $1 billion toward an innovation fund that is investing in robotics companies. In the study, teams from the University of Washington and the Technical University of Munich, in Germany, trained virtual robots using CLIP, a "large language" AI model created and unveiled by OpenAI last year. Engineers built the model by scraping billions of images and their captions from the Internet.


In the study, when researchers asked robots to identify images of "homemakers," black and Latina women were selected more often than white men. When the robots identified "criminals," black men were chosen 9 percent more often than white men.


For "janitors," images of Latino men were picked 6 percent more often than those of white men, and women were less likely than men to be identified as "doctors," the researchers found. In fact, the scientists say, robots shouldn't respond to such queries at all, because they don't have enough information to make such judgments.
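
For readers curious how such a query works under the hood, here is a minimal sketch of CLIP-style image-to-caption matching using the Hugging Face transformers library. The checkpoint name, image path, and labels are illustrative assumptions; the study's actual robot experiments were more involved than this.

```python
# Minimal sketch of CLIP zero-shot image-to-caption matching.
# Assumes the public "openai/clip-vit-base-patch32" checkpoint and a
# placeholder image file; neither comes from the study itself.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# "face.jpg" stands in for any portrait photo.
image = Image.open("face.jpg")
labels = ["a photo of a doctor",
          "a photo of a homemaker",
          "a photo of a janitor"]

# Encode the image and all candidate captions together.
inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# CLIP scores the image against each caption; softmax turns the
# scores into probabilities over the supplied labels.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

Because the model must distribute probability across whatever captions it is given, it will confidently "pick" a label even when, as the researchers argue, a photo contains no real evidence for the judgment.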


Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology and the study's lead researcher, said this type of bias could have real-world consequences: robots trained with biased AI models like CLIP may, for instance, pick products advertised by men or by white people more often than other products. "That's really problematic," Hundt said.


In response, Miles Brundage, head of policy research at OpenAI, said in a statement that the company has noted issues of bias in its own studies of CLIP and knows there is significant room for improvement. Brundage added that a "more thorough analysis" of the model would be needed before it is deployed in the robot-training market.

