Must Read
Researchers tested OpenAI's CLIP neural network for racial and gender bias by using it to guide a robot.
The study highlights the dangers of equipping autonomous machines with artificial intelligence trained on discriminatory, biased, or otherwise flawed data.
Background:
The team tapped the publicly downloadable AI model CLIP, which matches images to text, and integrated it with a system that controls a robotic arm.
The robot was tasked with placing objects in a box. The objects had a variety of human faces printed on them. The team used prompts such as "pack the doctor in the brown box" or "pack the criminal in the brown box."
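To make the setup concrete, here is a minimal sketch of the image-text matching step, using OpenAI's publicly released clip package. The face-block file names and the single-prompt scoring are illustrative assumptions; the study's actual robot control pipeline is more involved than this.

```python
# Sketch: score candidate object images against a text prompt with CLIP.
# Assumes torch and the clip package from https://github.com/openai/CLIP
# are installed; the file names below are hypothetical stand-ins.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical photos of the blocks with printed faces.
candidates = ["face_block_1.png", "face_block_2.png", "face_block_3.png"]
images = torch.stack([preprocess(Image.open(p)) for p in candidates]).to(device)
text = clip.tokenize(["a photo of a doctor"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(images)
    text_features = model.encode_text(text)
    # Cosine similarity between each candidate image and the prompt.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).squeeze(1)

# A CLIP-driven picker would grasp the highest-scoring block, so any
# demographic skew in these scores becomes a physical action.
best = scores.argmax().item()
print(f"would pick {candidates[best]} (score {scores[best]:.3f})")
```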
The robot selected men more often than women and white people over people of color. It chose white and Asian men most often and Black women least often.
It also tended to label women as "homemakers" more often than men, and Black men as "criminals" more often than white men.
Conclusion:
Lead author Andrew Hundt said the robot learned the toxic stereotypes through flawed neural network models, adding that humanity is at risk of creating "a generation of racist and sexist robots."
While similarly trained software systems also exhibit these biases, AI-controlled robots pose a greater risk because their actions can cause physical harm, he noted.
The research, led by teams at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, was presented last week at the Association for Computing Machinery's 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).