
Scientists Have Created A “Deliberately” Biased AI That Judges Your Face

Many psychological studies have demonstrated the biased nature of our judgments and decisions. When meeting new people in particular, humans frequently make a series of snap judgments based purely on appearance: facial characteristics, ethnicity, body type, and body language.

Using deep neural networks, researchers at Princeton University, Stevens Institute of Technology, and the University of Chicago’s Booth School of Business have attempted to predict some of the basic judgments that individuals make about others based just on their faces.

Their work, published in PNAS, presents a machine learning approach that accurately predicts the judgments people make about individual photos of faces.

“Our dataset not only contains bias, but it deliberately reflects it,” Princeton computer science postdoctoral researcher Joshua Peterson stated in a Twitter thread discussing the study.

https://twitter.com/joshuacpeterson/status/1517224879136796672?s=20&t=xlvDjcK8ruL4F_aWcGo8Dg

“As psychologists, we are interested in how people perceive and judge faces, especially when there are important consequences, such as hiring and sentencing decisions, involved,” Peterson said.

“However, most work up to now was limited to studying artificial 3D face renderings or small sets of photographs.”

Peterson and his colleagues asked thousands of individuals to rate over 1,000 computer-generated photographs of faces on qualities such as how intelligent, electable, religious, trustworthy, or extroverted the pictured person appeared to be. Those responses were then used to train a neural network to make similar snap judgments about people based solely on photographs of their faces.
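The article does not include the team's code, but the basic recipe it describes — average the human ratings collected for each face, then train an image model to regress those averages — can be sketched in a few lines. The snippet below is a minimal, hypothetical PyTorch illustration; the attribute list, ResNet backbone, and training details are assumptions made here for clarity, not the authors' published implementation.

```python
# Hypothetical sketch: regress averaged human ratings from face images.
# Attribute names, backbone, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

ATTRIBUTES = ["trustworthy", "intelligent", "electable", "religious", "outgoing"]  # assumed subset

class ImpressionRegressor(nn.Module):
    """Maps a face image to one predicted rating per attribute."""
    def __init__(self, n_attributes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any image backbone would do
        backbone.fc = nn.Linear(backbone.fc.in_features, n_attributes)
        self.backbone = backbone

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)  # shape: (batch, n_attributes)

model = ImpressionRegressor(len(ATTRIBUTES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, ratings: torch.Tensor) -> float:
    """One gradient step on mean-squared error against averaged human ratings."""
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), ratings)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy tensors stand in for (face image, averaged-rating) batches.
images = torch.randn(8, 3, 224, 224)
ratings = torch.rand(8, len(ATTRIBUTES))
print(train_step(images, ratings))

# At inference time, the same model yields predicted first impressions
# for a new face image.
model.eval()
with torch.no_grad():
    first_impressions = model(torch.randn(1, 3, 224, 224)).squeeze(0)
print(dict(zip(ATTRIBUTES, first_impressions.tolist())))
```

In this framing, a "first impression" is simply the model's per-attribute output for a given photo, which is what makes predictions like the one described below possible.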

“Given a photo of your face, we can use this algorithm to predict people’s first impressions of you and which stereotypes they would project onto you when they see your face,” explained Jordan W. Suchow, a cognitive scientist and AI researcher at the Stevens Institute of Technology and co-author of the study.

Peterson highlighted that most of the algorithm’s findings align with common intuitions and cultural or political stereotypes: people who smile, for instance, are judged as more trustworthy, while people who wear glasses are judged as more intelligent. In other cases, however, it is harder to pin down exactly why the algorithm attributes a particular trait to a person.

While AI techniques are already being used to make “deepfake” videos depicting events that never occurred, the new algorithm could also be used to subtly alter real photographs in ways that shape viewers’ impressions of their subjects.

“With the technology, it is possible to take a photo and create a modified version designed to give off a certain impression,” Suchow said.

“For obvious reasons, we need to be careful about how this technology is used.”

To guard against misuse, the research team has obtained a patent and is launching a startup to license the algorithm for pre-approved ethical uses.

“We’re taking all the steps we can to ensure this won’t be used to do harm,” Suchow said.
