EWC Community
Discriminatory AI Robots Reveal Sexist and Racist Prejudice in Algorithms
By: Claire Cao
As major technology companies accelerate AI development, experts conducted a study that uncovers underlying preconceptions in computer code. This raises major concerns about the effects of robots on society and whether such advanced technology will be able to integrate into communities at all.
A study conducted by researchers at Johns Hopkins University and the Georgia Institute of Technology tested a robot’s ability to scan a face and categorize a person by race and profession.
In one experiment, the AI robot repeatedly categorized Black people as “criminals.” It also associated professions such as “homemaker” and “janitor” with women and people of color.
Other researchers have also found racist patterns in AI algorithms used in criminal cases. Crime prediction code often unfairly targets innocent Black and Latinx people, and robots have repeatedly had difficulty accurately identifying people of color.
Many companies are increasingly incorporating robots and depending on them to replace human workers. Robots are being used to stock shelves, deliver goods, and even care for patients in hospitals. As our modern world shifts toward adopting complex code as part of daily life, experts are doing their best to ensure that technology is used for better rather than for worse.
As technology becomes ubiquitous, AI ethicists are warning us of unpredictable consequences if we do not address bias in algorithms as soon as possible.
Even though robots are, for the time being, used only for simple tasks, their flawed code is already having an impact on people. In an example given by Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology and lead researcher on the study, a shelf-stocking robot might favor products featuring or made by white people over products made by people of color because of biased code.
Another researcher, Vicky Zeng from Johns Hopkins University, raised concern about an at-home robot being asked to fetch a “beautiful” doll and returning with a white doll. This is deeply problematic because it reinforces inequitable beauty standards.
While programming completely unbiased robots may be next to impossible, that does not mean companies should give up. Algorithms must be closely monitored by AI ethicists, experts, and the companies themselves, and continually refined to remove as much bias as possible for a smooth transition to technological dependence.