February 26, 2021

Humans are trying to take bias out of facial recognition programs. It’s not working–yet

Facial recognition technology misidentifies people of color at disproportionately high rates. One significant reason is likely the lack of diversity in the datasets that underpin computer vision, the field of artificial intelligence that trains computers to interpret the visual world. Researchers are working to remedy this by providing computer vision algorithms with datasets that represent all groups equally and fairly.
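
As an illustration only (this code is not from the study), one simple way to make a training set represent all groups equally is to downsample every demographic group to the size of the smallest one. The sketch below assumes each sample carries a demographic group label; the function and field names are hypothetical.

    import random
    from collections import defaultdict

    def balance_by_group(samples, group_key, seed=0):
        # Downsample each demographic group to the size of the smallest
        # group, so all groups are equally represented in the result.
        # Illustrative sketch only; not the method from the paper.
        rng = random.Random(seed)
        groups = defaultdict(list)
        for sample in samples:
            groups[group_key(sample)].append(sample)
        target = min(len(members) for members in groups.values())
        balanced = []
        for members in groups.values():
            balanced.extend(rng.sample(members, target))
        rng.shuffle(balanced)
        return balanced

    # Hypothetical usage: each sample is a dict with an image path
    # and a demographic group label.
    dataset = [
        {"image": "img_001.jpg", "group": "A"},
        {"image": "img_002.jpg", "group": "B"},
        {"image": "img_003.jpg", "group": "A"},
        {"image": "img_004.jpg", "group": "B"},
    ]
    balanced = balance_by_group(dataset, group_key=lambda s: s["group"])

Note that even a dataset balanced this way can still carry bias in how its images were collected and labeled, which is the concern the new research raises.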

But even that process may perpetuate biases, according to new research by Zaid Khan, a Ph.D. student in computer engineering at Northeastern University, and his advisor Raymond Fu, a professor in the university's College of Engineering and Khoury College of Computer Sciences.

A service of the Deutscher Präventionstag (German Prevention Congress).
www.praeventionstag.de